
Torchlight 3

State of the Game: Week 3

By SoFech | Mon, Jul 06, 2020 20:20:58 PDT

Check out our Discord, Facebook, Twitter, and Twitch to get the latest development news and updates.

 

 

During Early Access we want players to get up-to-the-moment information about what's going on with the game & dev team focus. This week's State of the Game letter comes from Guy Somberg, Lead Programmer of Echtra.

Sometimes the best possible way to look to the future of a project is to understand its past. While next week’s State of the Game will take a deeper dive into the upcoming updates, ongoing issues, and other community feedback, this State of the Game is for those who enjoy a good technical read and want a postmortem on our Early Access launch weekend. It was written shortly after that weekend in order to retain as many details as possible.


Introduction

Wow, what an exciting weekend it’s been.  On Saturday morning, we launched Torchlight 3 for early access on Steam, after having been in alpha and then beta for 15 months.  After a year of having live players from all over the world playing our game, we had developed a lot of technology and procedures for deploying changes, fixing issues, investigating problems, triaging bugs, and the other fine-grained minutiae of running a live service.  The game servers and game clients had been running fairly smoothly, so we felt prepared to open up the floodgates.

 

Life likes to throw curve balls, and we had a ton thrown at us over the course of this past weekend.  Our team was up through the weekend chasing down problems, and now that things have settled down and our service is in better shape, it’s worth taking a look back.  So, let’s go on a journey of the release and see what happened and how we fixed it.

 

Torchlight 3 Launches

Our live team got together early on Saturday morning.  Or, at least, early for game developers.  At 10am, we had our build ready to go, and being white-gloved by QA.  A white-glove test is a test that QA does on a build when there are no other players on it to “check for dust” - that is, to make sure that there are no surprise issues.  That process was complete, and at 10:30am - with no fanfare - the game was available for purchase on Steam.

 

Just after 11am, Max went on the PC Gaming Show to talk about Torchlight 3, and told people that it’s live now.  So much for no fanfare.  Thus opened the floodgates, and people started playing.  For a little while, things seemed to go well.  Our concurrent users (CCU) numbers went from zero up to a few thousand in a very short timeframe.

 

Very quickly, though, we started to get reports of players disconnecting, failing to log on, and failing to travel across zones.  The team jumped into action like a cheetah on a pogo stick.  We determined that there were likely two root causes here: service scaling and server reaping.

 

Most of our back-end services are horizontally scalable.  That means that, if one of the services is under high load, then we can bring up more instances of it to handle the load.  We did exactly that to handle one of the problems.  The fact that the load was so high was indicative of other problems, but increasing the resources allotted to those services helped resolve those issues while we investigated the root causes.

 

The other problem was server reaping.  For some reason, the services that were monitoring our game servers were not getting informed that those servers had players on them.  We tracked down the connection issue and the disconnects stopped.

 

The Next Step: Quests and Characters

With these two problems resolved, we now had a more stable service, and players started running into a new set of problems.  This time, the problems manifested as quest progress taking a very long time to appear (if at all).  You could still play, but the game would be subtly broken.  Around the same time, we were getting reports of players being unable to create characters.

 

We tracked the quest problem to a database configuration issue that was causing the responses from the database to the backend service to be slow.  Unfortunately, in order to deploy those changes, we needed to take the game down briefly.  Our expectation was that it would be 10 minutes, so we told people to expect 20 minutes of downtime.  30 minutes after we started the downtime, we were finally able to bring up the servers.

 

The other issue that we were trying to track down was why some people were unable to create characters.  Fortunately, this one was easy.  Log files from the affected players all indicated that these people were attempting to name their pets using Cyrillic characters, which is disallowed by the services, but accidentally allowed by the game client.  We will fix this one in the client, but in the meantime we just informed these players about this restriction.
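
As a rough illustration of the kind of client-side check involved - the exact allowed character set here is an assumption for the example, not our real naming rules - a validator along these lines rejects such names before they ever reach the services:

```go
package main

import (
	"fmt"
	"regexp"
)

// allowedName is a placeholder character set for illustration only; the real
// rules live in the services, not in this sketch.
var allowedName = regexp.MustCompile(`^[A-Za-z0-9 '-]{1,24}$`)

func isValidPetName(name string) bool {
	return allowedName.MatchString(name)
}

func main() {
	fmt.Println(isValidPetName("Rex"))   // true
	fmt.Println(isValidPetName("Шарик")) // false: Cyrillic characters are rejected
}
```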

 

We’re All Good Now, Right?

There were still some disconnects and other issues, but toward the end of the day we felt as though we were in good enough shape that we could all go to sleep.  We had some inklings that there were still more issues, but we only had a few shots in the dark to try to fix them before bed.

 

Of course, the world is round and Europe and Asia were just about to get into their respective peak times.  Several of our team members were up through the night watching the EU and Asia servers go bad.  People didn’t seem to be crashing out, but they did seem to continually be disconnected.

 

The Next Morning

Our first impulse on having load issues is to scale up the services, which we did as quickly as we could.  We were bringing up servers in our data centers as well as in our cloud providers.  Unfortunately, the cloud provider has a rate limit on how frequently they’re willing to grant IP addresses - something like 2 per minute - so that severely limited how frequently we could bring up the extra load.

 

Early Sunday morning, then, one of our devops engineers said - in a very polite and timid fashion - “You know, it seems like we’re trying to run in a way that is not sustainable.  This whole thing with loading up 20 zones per player just isn’t going to work long-term.”

 

Wait, WHAT?  20 zones per player?  That’s not how our game is supposed to work at all!  Our game servers are a piece of hardware which runs a whole bunch of processes that we call Zones (or sometimes game servers).  Each zone runs the game world for a collection of areas - for example, Edgewood Bluff, Heroes’ Rest, and the Den of Upheaval will all be hosted by a single zone process.  How many zones do we expect to have per player?  Well, we’ve got the collection of zones that you’re in, maybe the last zone you just came from, maybe a town, and maybe your fort.  That’s a maximum of 4, but we should be averaging somewhere around 1.2 to 1.5 zones per player.  How did we end up with 20 zones?  That’s insanity!  Something is wrong with what we’re measuring or doing, but that is not right!

 

Zombie Zones

Now that we knew that we had a problem, we inspected the processes on the running servers and saw some defunct processes - colloquially called “zombie processes”, which gave rise to the nickname “zombie zones”.  A “defunct” process in Linux is a process which has been killed but is still lingering.  It still holds on to some minor resources, such as file handles, and it is officially dead - but not gone.  Thus: zombies.
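
If you want to see a defunct process for yourself, here is a tiny stand-alone Go program (not our server code) that creates one: the child exits immediately, but until the parent calls Wait(), the kernel keeps the dead entry around.

```go
package main

import (
	"os/exec"
	"time"
)

func main() {
	// Hypothetical child process that exits immediately.
	child := exec.Command("/bin/true")
	if err := child.Start(); err != nil {
		panic(err)
	}

	// The parent never reaps the child during this window, so the exited child
	// lingers as a <defunct> entry - run `ps -ef | grep defunct` to see it.
	time.Sleep(60 * time.Second)

	// Reaping the exit status is what finally makes the zombie go away.
	_ = child.Wait()
}
```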

 

Although we did not realize it at the time we found the zombies, we later discovered that these zombie processes were actually counting toward the capacity of the servers.  We would ask a server if it had more capacity to spin up a new zone, and it would say “no, I’m full up”.  But it was full of zombies, not actual zones.

 

At this point, we had to do two things:

 

  1. We had to kill all of the zombies.
  2. We had to figure out why the zombies were appearing in the first place.

 

In order to kill the zombies for now, we actually just rebooted each of the servers in order to free up the capacity.  Players could then start taking up that space, but we knew that the zombies would be coming back, so we needed a fix for the underlying issue.  It was like playing Whack-a-Mole - as soon as we had cleaned up one set of servers, another one would get infested with zombies again.

 

Background

At this point, we’re going to get a bit more technical.  We’ve already discussed the zone servers, but there is also one other process, which we call the Zone Controller (ZC), whose job it is to spin up and monitor zone processes.

 

So a message comes in (say from a player who wants to travel) to the ZC, and it spins up a new zone process and responds with instructions about how to travel to that zone.  The ZC also has a few other tasks.  It monitors the zones that it has spun up, and if it sees one that it thinks isn’t responding (due to a crash or hang), it kills it.  It also reports back a lot of information such as what zones it’s got, how much capacity it has, and what players are in what zones to another server called Zone Lookup (ZL).  (Zone Lookup is another character in our story, which will come up soon.)

 

More Background

Upon examining the Zone Controller logs, we realized that it was sending 410 HTTP status codes to the zone servers.  When a zone server checks in with the ZC, the ZC can examine the server’s state and respond with a message indicating that the zone server should shut itself down.  It would do this, for example, if it has observed that there are no players in that zone for a certain length of time.
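
In sketch form - with an invented endpoint and placeholder timeout, not our real API - the check-in/410 exchange looks something like this: if the zone has been empty past the timeout, the ZC answers the check-in with 410 Gone and the zone server shuts itself down.

```go
package main

import (
	"net/http"
	"time"
)

// emptySince tracks when each zone last dropped to zero players (placeholder data).
var emptySince = map[string]time.Time{}

const idleTimeout = 10 * time.Minute // placeholder value

func checkInHandler(w http.ResponseWriter, r *http.Request) {
	zoneID := r.URL.Query().Get("zone")
	if t, ok := emptySince[zoneID]; ok && time.Since(t) > idleTimeout {
		w.WriteHeader(http.StatusGone) // 410: please shut yourself down
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/checkin", checkInHandler)
	http.ListenAndServe(":8080", nil)
}
```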

 

However, this 410 code is actually a vestigial piece of technology right now, because of a different piece of tech that we implemented.  When all of the players leave a zone server, the ZC will actually put the process to sleep using a Linux system call.  In this mode, the process is not even scheduled by the operating system, so it consumes zero CPU cycles.  We wake the process up if somebody travels back there (like if you portaled to town or to your fort and then returned), but if you never return then we never wake the process up.  After the timeout occurs, we kill the process entirely.
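
One way to get exactly that behavior is SIGSTOP/SIGCONT - a stopped process is no longer scheduled and burns zero CPU until it is continued. This is an illustrative sketch with a stand-in child process, not the actual ZC code, and the specific signals are our assumption.

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// Hypothetical stand-in for an idle zone process.
	zoneProc := exec.Command("sleep", "300")
	if err := zoneProc.Start(); err != nil {
		panic(err)
	}
	pid := zoneProc.Process.Pid

	// "Sleep" the empty zone: the kernel stops scheduling it, so it uses no CPU.
	syscall.Kill(pid, syscall.SIGSTOP)
	fmt.Println("zone paused:", pid)

	time.Sleep(5 * time.Second)

	// A player travelled back, so wake the zone up again.
	syscall.Kill(pid, syscall.SIGCONT)
	fmt.Println("zone resumed:", pid)

	// After the idle timeout, kill the process entirely.
	zoneProc.Process.Kill()
	zoneProc.Wait()
}
```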

 

That makes it very surprising to see these 410 codes showing up.  Under normal operation, we should never see the 410s being triggered.  But we were seeing a lot of them.  Like, A LOT.  On an individual ZC that manages 100 to 200 zones, we would see one 410 every 1 to 2 seconds.

 

So, that was a thing.  And, of course, we’re building up these zombie zones at the same time.

 

Moving Toward a Fix

Both the zombie zones and the 410s seemed to be related, and both of these problems were pointing at the ZC.  We started to dig into the code of the ZC to see what the problem could be.

 

The ZC stores a map containing information about all of the zones.  This map gets locked as a whole entity any time changes are made to it, and we were suspicious that there was some contention over the lock.  The zone server sends messages to the ZC at a regular rate containing information about itself and its player counts, and the ZC updates this map in response to those messages.  The 410 codes could be explained if the ZC was missing these update messages or not processing them fast enough: the piece of code that checks the map would notice that it hadn’t heard from the zone server and kill it, even though there was an update message in the queue that simply hadn’t been processed yet.
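
A much-simplified model of that arrangement (invented names, not the real ZC code) shows how stalled heartbeat processing can get a healthy zone killed:

```go
package main

import (
	"sync"
	"time"
)

type zoneInfo struct {
	players  int
	lastSeen time.Time
}

type controller struct {
	mu    sync.Mutex
	zones map[string]*zoneInfo
}

// onHeartbeat handles the periodic check-in from a zone server.
func (c *controller) onHeartbeat(zoneID string, players int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	z, ok := c.zones[zoneID]
	if !ok {
		z = &zoneInfo{}
		c.zones[zoneID] = z
	}
	z.players = players
	z.lastSeen = time.Now()
}

// shouldShutDown decides whether to answer a check-in with a 410. If heartbeat
// messages pile up behind the lock, lastSeen goes stale and a perfectly
// healthy zone gets told to shut down.
func (c *controller) shouldShutDown(zoneID string, timeout time.Duration) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	z, ok := c.zones[zoneID]
	return !ok || time.Since(z.lastSeen) > timeout
}

func main() {
	c := &controller{zones: map[string]*zoneInfo{}}
	c.onHeartbeat("edgewood-bluff", 3)
	_ = c.shouldShutDown("edgewood-bluff", 30*time.Second)
}
```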

 

We still didn’t know why this was happening, but we were fairly confident that this was what was happening.

 

An Unrelated Problem? Zone Server Spin-up Time

Another issue that we noticed was that new zones were taking a long time to spin up.  Ordinarily, it should take about 10-15 seconds, but new zones were taking significantly longer for some reason.  This issue seemed to be unrelated to the others.  When it rains, it pours.

 

We traced this issue to the ZC reaching out to the content service.  When starting up a new zone, the ZC gets information about which content version to use.  These are things like our spreadsheets, quests, recipes, etc.  Ordinarily, it requests this value every 5 minutes.

 

However, the code was written such that every new zone that was trying to spin up while it was requesting the new content version would block waiting for that answer.  So, if it was slow to get the answer - and we think it was for a while - then new zones wouldn’t come up for a long time because they were waiting for the content version.

 

This was where we started to realize that if the upstream services like Zone Lookup and the content service were lagging, then they were causing blocks in the ZC.  While we started investigating those issues, we rewrote how the ZC was getting the content version so that we store a cached value and reuse it until the new answer comes back.
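
The shape of that fix, under our own assumptions about the content-service call (the fetch function here is a placeholder), is roughly: return the cached version immediately and refresh it in the background, so zone spin-up never blocks on the network.

```go
package main

import (
	"sync"
	"time"
)

type contentVersionCache struct {
	mu      sync.RWMutex
	value   string
	fetched time.Time
}

// Get returns the cached version immediately; when the value is stale it kicks
// off a background refresh, so callers never block on the content service
// (except the very first call, when nothing is cached yet).
func (c *contentVersionCache) Get(fetch func() (string, error), ttl time.Duration) string {
	c.mu.RLock()
	v, age := c.value, time.Since(c.fetched)
	c.mu.RUnlock()

	if v == "" {
		// First call ever: we have to wait for an answer once.
		if nv, err := fetch(); err == nil {
			c.set(nv)
			return nv
		}
		return ""
	}

	if age > ttl {
		go func() {
			// Refresh asynchronously; callers keep using the old value until
			// the new answer comes back.
			if nv, err := fetch(); err == nil {
				c.set(nv)
			}
		}()
	}
	return v
}

func (c *contentVersionCache) set(v string) {
	c.mu.Lock()
	c.value = v
	c.fetched = time.Now()
	c.mu.Unlock()
}

func main() {
	cache := &contentVersionCache{}
	fetch := func() (string, error) { return "content-v123", nil } // hypothetical content service call
	_ = cache.Get(fetch, 5*time.Minute)
}
```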

 

Blocking Operations While Locking

Now that we saw that there were issues with the locks and communicating with other services, we examined whether there was anything that could be holding onto the lock for long periods of time.  The first thing we observed was that we had some situations where we would take a lock on the map of zones and then perform some operating system calls, which could cause the process to be unscheduled and could potentially be long-running.  We rewrote those sections of code such that the OS calls happened outside of the lock.
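
In before/after form, with invented names standing in for the real code, the change looks like this:

```go
package main

import (
	"fmt"
	"sync"
)

type zone struct{ pid int }

type controller struct {
	mu    sync.Mutex
	zones map[string]zone
}

// killProcess is a stand-in for the real shutdown path (signals, cleanup, etc.).
func killProcess(pid int) { fmt.Println("would kill pid", pid) }

// Before: the OS call happens with the lock held, so everything else that
// needs the map waits behind a potentially slow, unschedulable operation.
func (c *controller) reapZoneLocked(id string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	killProcess(c.zones[id].pid)
	delete(c.zones, id)
}

// After: grab what we need, drop the lock, then do the slow OS call outside
// the critical section.
func (c *controller) reapZone(id string) {
	c.mu.Lock()
	pid := c.zones[id].pid
	delete(c.zones, id)
	c.mu.Unlock()

	killProcess(pid)
}

func main() {
	c := &controller{zones: map[string]zone{"example-zone": {pid: 12345}}}
	c.reapZone("example-zone")
	_ = c.reapZoneLocked // kept only to show the "before" shape
}
```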

 

Now we had two fixes to try out: the content service cached value, and the OS calls outside of the locks.  We put both of these fixes up onto a single server and monitored that one server to see whether or not the situation had improved.  After a few minutes, it was clear that the situation was a little better - a 410 code every two seconds instead of two times every second, and we were still seeing zombie servers - but it was still a problem.  It was definitely some improvement, but there was still work to do.

 

Rubber Ducks and Go Channels

At this point, it was back to the drawing board.  We knew generally where the problem had to lie, but did not have a specific root cause.  So, we busted out our rubber ducks.

 

What do rubber ducks have to do with it?  The parable goes that a junior engineer walks into a senior engineer’s office and asks for help with a problem.  The senior engineer stops the junior engineer short, hands over a rubber duck and says “Tell it to the rubber duck, then tell it to me,” then turns around and goes back to work.  The junior engineer thinks this is weird, but shrugs and starts to tell the rubber duck about the problem.  “Whenever I call this function, it crashes.  But it’s always on the last element of the list for some reason.  Oh!  It’s because that element is one past the end, so I can’t dereference it!”  The junior engineer rushes off to fix the code.

 

The moral is that oftentimes you just need to describe a problem to somebody else - anybody else - in order to realize what the problem is.  “Rubber duck debugging” is the process of just having somebody else describe to you how the system works and what the problem is.

 

So we started asking questions about how the ZC works and how it spins up and monitors zones.  As a part of this, one of our engineers noticed that we were sending events on a Go channel while holding a lock on the map.

 

Most of our backend services are written using the Go programming language, which has many low-level routines for concurrency and communication.  One of these primitives is called a Channel, which is a queue that can have messages pushed onto it from one thread of execution and received from another thread of execution.  (These threads of execution are called “goroutines”.)
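
Here is the primitive in miniature: one goroutine sends on a buffered channel, another receives. Sends only block once the buffer is full - which is exactly the trap described next.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	events := make(chan string, 2) // buffered channel, capacity 2
	var wg sync.WaitGroup
	wg.Add(1)

	go func() { // receiving goroutine
		defer wg.Done()
		for e := range events {
			fmt.Println("handled:", e)
		}
	}()

	events <- "zone started"  // sends don't block while buffer space remains
	events <- "player joined" // buffer is now full; a third send would block
	close(events)
	wg.Wait()
}
```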

 

We discovered that the system was set up to take a lock on the map of zone servers, push some data onto this channel, make a request to the Zone Lookup, and then go into a loop monitoring the zone server process.  Ordinarily, this would be fine because the data getting pushed into the channel would be buffered up.  However, the buffer length is finite, and eventually the buffer filled up as the other routines tried to hit up the Zone Lookup.
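
A tiny, self-contained repro of that failure mode (our own illustration, not the ZC code): a send on a full channel while the lock is held stalls everyone else who needs the lock.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var mu sync.Mutex
	lookupQueue := make(chan string, 1) // tiny buffer to make the point quickly
	lookupQueue <- "earlier request"    // the buffer is already full

	// The "slow Zone Lookup": it only drains the queue after two seconds.
	go func() {
		time.Sleep(2 * time.Second)
		<-lookupQueue
		<-lookupQueue
	}()

	// The problematic pattern: send on the channel while holding the lock.
	go func() {
		mu.Lock()
		defer mu.Unlock()
		lookupQueue <- "register zone with Zone Lookup" // blocks ~2s with the lock held
	}()

	time.Sleep(100 * time.Millisecond)

	start := time.Now()
	mu.Lock() // e.g. a heartbeat update - stuck behind the blocked sender
	mu.Unlock()
	fmt.Printf("heartbeat waited %v for the lock\n", time.Since(start))
}
```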

 

So our procedure looked like this:

  • Start up a zone
  • Wait for some network communication with Zone Lookup
  • Monitor the zone

 

However, due to some slowness in Zone Lookup, the network communication could take multiple minutes to complete.  And, during that time, the routine that monitors zones for liveness would come in, see that the zone was empty and wasn’t doing anything - or, rather, it appeared that way because the monitoring hadn’t started yet - and kill it.  Finally, when the network communication returned, it would continue operation and start to monitor an already-dead process.

 

Now, the monitoring would detect that the process was dead and clean it up once it got there.  However, because it was waiting for this network communication that was taking multiple minutes to complete, these processes would stick around until the monitoring call could clean them up.  Thus: zombie zones!

 

A Fix for the Zombie Zones

Now that we knew what was happening, we fixed the code so that it could start monitoring the process while it was waiting for Zone Lookup.  Once again, we deployed the change, and…

 

Still getting 410s and zombie zones.  It was definitely better, but the problem was still there.  Fortunately, one of our engineers noticed that we were doing this pattern in multiple places, where we were sending data into a Go channel and waiting for a response.  But now we had a procedure for fixing them.  We found all of the instances where this was happening and fixed them all by putting the channel communication into a goroutine so that it wouldn’t block normal operation.
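
The shape of the fix, in miniature and with placeholder names: move the channel send (and the wait on Zone Lookup) into its own goroutine so the caller can start monitoring the new zone immediately.

```go
package main

import (
	"fmt"
	"time"
)

// registerWithZoneLookup pushes the registration onto the queue from its own
// goroutine: if Zone Lookup is slow, only that goroutine waits.
func registerWithZoneLookup(lookupQueue chan<- string, zoneID string) {
	go func() {
		lookupQueue <- zoneID
	}()
}

// monitorZone stands in for the liveness-monitoring loop; it now starts right
// away, so the zone never sits in the "alive but unmonitored" window that
// produced the zombies.
func monitorZone(zoneID string) {
	fmt.Println("monitoring", zoneID)
}

func main() {
	lookupQueue := make(chan string) // unbuffered: a send waits for a receiver

	zoneID := "edgewood-bluff"
	registerWithZoneLookup(lookupQueue, zoneID)
	monitorZone(zoneID) // no longer blocked behind the registration

	// Pretend Zone Lookup finally gets around to answering much later.
	time.Sleep(50 * time.Millisecond)
	fmt.Println("zone lookup received:", <-lookupQueue)
}
```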

 

Now that we had an understanding of the problem, this helped explain a lot of the issues we had been seeing.  For example, the zone server sends an HTTP message to the ZC and waits (asynchronously) for a response.  The ZC is blocked on its own downstream operation for a long time.  Meanwhile, the zone server is trying to send more HTTP requests and waiting for responses, but the zone server also has a limited queue of operations that it’s willing to wait for.  The end result is that the zone server drops further requests without sending them, so we start to lose things like quest updates or character saves.
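
What “drops further requests” looks like with a bounded queue, in an illustrative sketch (the names are ours, not the zone server’s): once the queue is full, new work is discarded rather than blocking the game.

```go
package main

import "fmt"

func main() {
	pending := make(chan string, 2) // the zone server's limited outbound queue

	enqueue := func(req string) {
		select {
		case pending <- req:
			fmt.Println("queued:", req)
		default:
			// Queue full because the ZC isn't answering: the request is lost,
			// which is how quest updates and character saves went missing.
			fmt.Println("dropped:", req)
		}
	}

	enqueue("quest update")
	enqueue("character save")
	enqueue("another quest update") // dropped
}
```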

 

So, to reiterate, we’re blocking up ZC’s ability to monitor zones, we’re blocking up the zone server’s ability to send messages, and we’re leaving these zombie zones around which occupy capacity that should be going to actual players.  All of these problems stacked up at the same time due to this one pattern that we were using, which caused all sorts of badness.

Stable(r) Service

 

It was now 11:40pm on Sunday night, and we deployed our fixes to a single ZC and started watching it to see what happened.  Everything looked good initially.  We waited for 5 minutes, excitedly - everything was going great!  Then we waited for 20 minutes, impatiently - just to make sure.  After 20 minutes everything looked good, so we started deploying it out in waves to the rest of the servers across our infrastructure in the US, Europe, and Asia.

 

Rolling this change out did require us to kick people off of servers, but we did our best to make sure that the players knew what was going on and that it was a temporary thing as we rolled out stability fixes.

 

After about an hour of rollouts, we declared success on this issue.  The service is in good shape.  Players are able to log in, travel, and play successfully for extended periods of time.  It’s not perfect yet - there are still some issues left that we are continuing to monitor and tackle, but our servers were in a much better place after those first 48 hours.

 

We went to bed late Sunday night (really early Monday morning) and woke up somewhat later on Monday morning to a day where our players were discussing skill respecs and other gameplay issues rather than complaining about not being able to play.

 

Conclusion

In any distributed system, there is always going to be more than one root cause of any perceived issue.  We had to dive through about four levels of other problems before we even realized that the zombie zones were an issue in the first place, and then once we tracked down the problem there were several places in our code that had to be fixed in the same way.

 

The lesson here for us is that the first impression of a problem’s root cause is unlikely to be the actual problem, so we need to constantly re-evaluate our fundamental assumptions.

As we move forward from this Early Access launch, we want to thank all of our fans and players for your patience as we worked through these problems.  We know that it can be frustrating sometimes, but rest assured that we’re working tirelessly to fix the problems, and that we are listening to your concerns and feedback about the game.

 

  • Guy Somberg

 

 

While we are working to clean up and reorganize our Support Center after Tuesday’s very large patch, we wanted to highlight a few of the most critical, highly reported, and consistent issues players are still encountering and any updates we might have for those specific problems:

 

CTRL + D copies your debug information to your clipboard. Logs can be found in your %localappdata%\Frontiers\Saved\Logs Windows folder. Please send logs and debug information to feedback@echtra.net

 

Luck Tree is still not available to some players who may have deleted it or had their quest break.
https://torchlight3.nolt.io/768

  • Please confirm this is not in your consumable inventory on any character. If you are still missing this item on all characters (account wide) we are investigating ways to resolve this for those players affected. Stay tuned for more information.

 

Reliquary is still not available to some players who may have deleted it or had their quest break.

https://torchlight3.nolt.io/1319

  • If you are still running into this issue after defeating the Kronch please contact us at feedback@echtra.net. This should not be happening and we need more information if you believe your account has no Reliquary available to it.

 

We are aware that some Phase Dungeons are not producing an exit portal. Some players are stating they are getting stuck without the option to move.

https://torchlight3.nolt.io/1358

  • For those who are experiencing getting stuck in an instance, please send your debug info (see above) to feedback@echtra.net with the title “Stuck in Instance” and we will continue to investigate the cause of players getting stuck in various instances.

https://torchlight3.nolt.io/1338

  • For those who are experiencing Phase Dungeons that do not end up with an exit portal, please send your logs and debug info (see above) to feedback@echtra.net with the title “Phase Dungeon No Exit” and we will continue to investigate the cause of players unable to find a portal exit.


Tier 1 Entropy is not working correctly.

https://torchlight3.nolt.io/1353

  • We are investigating a fix for a future update.

 

Energy Spike tiers are also not working correctly.

https://torchlight3.nolt.io/1343

  • Tier 3 was fixed in a patch yesterday.


Resource Nodes stay on the map.

https://torchlight3.nolt.io/61

  • No updates at the moment, still investigating.


Teleporting to Fort should spawn you at the waypoint.

https://torchlight3.nolt.io/317

  • No updates at the moment, still investigating.

 

Players report that the Edit Fort button is no longer available (you can still press F to edit).

https://torchlight3.nolt.io/1357

  • Fixed in the hotfix patch yesterday.

 

Map incorrectly shows Bugswat Burrows connects to Protected Trail.

https://torchlight3.nolt.io/769

  • To travel to Murky Miasma, you access it through the "Protected Trail" in Acrid Plains, not Bugswat Burrow. The world map is incorrect and needs to be fixed. No updates at the moment; still working on a fix.

 

Ultrawide resolutions don’t fit perfectly.

https://torchlight3.nolt.io/576

  • We are continuing to work towards a UI scaling option within our settings menu. In the meantime, we are reviewing various resolutions for more ways to improve the experience for those who have very large monitors.

 

Multiple duplicate Fort Decorations cannot be deleted, sold, or removed from your inventory.

https://torchlight3.nolt.io/94

  • There are fixes for several of these items in future updates, and we will continue to make improvements to ensure items never get stuck in player inventories. Affected items may remain stuck until a future update/wipe.

 

Boss Chests are not dropping loot. This seems to be happening more for those in groups.

https://torchlight3.nolt.io/1362

  • There is a minimum threshold needed to be included in a kill. If you do not meet this threshold you may not get the boss loot. However, we do believe something funky might be happening, so if you believe you did not receive loot from a Boss Chest when you should have, please send your debug info and logs (see above) to feedback@echtra.net with the title “No Boss Chest Loot”.

 

Players report getting sent to the wrong place after teleporting. This seems to be more apparent and usually only occurs at times of high server load.

https://torchlight3.nolt.io/623

  • We are aware of random travel issues and we pushed a hotfix yesterday to help resolve them. If you have updated and are still being sent to the incorrect instance upon travel, please send us your logs and debug info (see above) to feedback@echtra.net with the title “Incorrect Travel”.
  • If you are getting continuously disconnected from your Fort, try rebooting Steam; this seems to resolve the issue. We are investigating the cause.

 

Achievements are triggering for players upon logging into the game for the first time.

https://torchlight3.nolt.io/386

  • We are aware of this issue and are investigating. We don’t have any further updates on whether these achievements will be removed at launch or not.

 

While trying to travel to or from the Fort, players report continuously DCing until they reboot or use Z to travel.

https://torchlight3.nolt.io/1430

  • We are aware of this issue and are investigating.

 

When attempting to travel, players get stuck in a blue void.

https://torchlight3.nolt.io/186

  • We are aware of this issue and are investigating.

 

General Lag & Disconnect Complaints

https://torchlight3.nolt.io/453

  • We are aware of this issue and are investigating. If you continue to experience disconnects, lag, and connection issues, please send your logs and debug information (see above) to feedback@echtra.net with the title “Consistent Lag/Disconnects”.

 

As always, you can continue to report bugs and feedback on our Support Center and we will continue to update everyone and send out patches as we resolve problems. Thank you again for the patience, help, logs, and community support as we progress through Early Access and make a better launch for all Torchlightkind.

 

 
