
Feb 24, 2006

Comments

1.

The usual bottleneck for the number of people in the same area of a world isn't end user bandwidth. Most worlds are constrained more by latency, server CPU, database access and other factors.

2.

If the size of the pipe to the home increases, that will also allow each user's experience of a VW to become much more fluid and comfortable. The long patch download times, the lag, a lot of that will clear up -- at least until designers start pushing the limits again. It will also allow for bigger single shards, if server power follows suit, as one of the chokepoints is the need to communicate a lot of dynamic positional and other information to a lot of clients at once. If I'm standing around with one other person, the server needs to tell my client what the world is doing and what the other person is doing, and do the same for the other person. OTOH, if I'm standing around with three other people, the server needs to do that for each of the six pairs among us [1-2, 1-3, 1-4, 2-3, 2-4, 3-4], in both directions, and so on. I'd think that broadening the pipe to the home would ease this burden a bit and make it easier to push that information out faster. But the sense I get from what rudiments I know is that VWs would benefit more from greater server power than from greater user bandwidth (though that's not at all a bad thing, to be sure).
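For what it's worth, the counting in that example can be made explicit. A tiny sketch (the function and the printed figures are mine, purely to illustrate the growth -- a real server batches these updates rather than relaying them pair by pair):

    from itertools import combinations, permutations

    def update_counts(player_count):
        """How many position relays a naive server performs per tick."""
        players = range(1, player_count + 1)
        pairs = list(combinations(players, 2))     # 1-2, 1-3, ... (the "six pairs" above)
        directed = list(permutations(players, 2))  # each pair, in both directions
        return len(pairs), len(directed)

    print(update_counts(2))   # (1, 2)
    print(update_counts(4))   # (6, 12)
    print(update_counts(60))  # (1770, 3540) -- growth is roughly N squared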

3.

If bandwidth is truly a limiting factor, it is reasonable to view the demand as N squared, if you assume that each of N players, when he makes an action or change, must communicate that change to each of the other N-1 players.

However, it is my own view that with increasing bandwidth you will not merely increase the number of users by SQRT (100), but rather improve the quality of interaction between the users (i.e. increase the amount of information that can be transmitted with each change).

A reasonable analog to consider is the growth in interaction that has occurred over the past thirty years as bandwidth has developed. Thirty years ago, all of us had access to telephones, a relatively low-bandwidth device with limited computational potential. As bandwidth through the internet has dramatically increased (arguably by 10,000X or more), it has not so much increased the number of people using telephones and other remote communication devices (computers, cell phones, etc.) as it has improved the quality, nature and frequency of their communications.

P.S. When considering bandwidth limitations, N is not necessarily the number of players in the game, but the (average) number of players with which each player is interacting. Even in a game of 10,000 players, the average player is only directly interacting with a relative handful at any time.
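Spelling out the arithmetic behind the sqrt(100) figure (treating N, per the P.S., as the number of mutually interacting players, and assuming per-area traffic really is the binding constraint):

\[
B \propto N(N-1) \approx N^2
\quad\Longrightarrow\quad
N_{\text{new}} = N_{\text{old}}\sqrt{\tfrac{B_{\text{new}}}{B_{\text{old}}}}
             = N_{\text{old}}\sqrt{100} = 10\,N_{\text{old}}
\]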

4.

The bounding factor for the number of people in an area is on the server more than in the pipe (here's a "lipschticking the chicken" moment -- in Asheron's Call, if too many people gathered in one area, mystical purple bubbles would surround you and teleport you a short distance away!). Regular broadband (ISDN/cable) is terrific for making client-server packet size and frequency much less of an issue in MMO development.

What a 100x increase in bandwidth to the home does is make waiting time for packaged entertainment into a non-issue. You could download a full movie or game in a matter of seconds, or essentially zero time with effective background streaming. In terms of games, this could further erode the stranglehold that retail has on distribution, potentially make for new revenue models (pay for your MMO on your cable bill along with HBO), make more frequent updates of online games feasible, and even make more dynamic worlds possible -- one player changes the world geometry which is reflected on the server and blasted to all the clients.
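To put rough numbers on "a matter of seconds" (the ~5 Mbit/s cable baseline and the 4 GB title size below are my illustrative assumptions, not figures from anywhere in this thread):

    # Back-of-envelope only; both input figures are assumptions.
    baseline_mbps = 5                    # rough 2006-era cable-modem downstream
    boosted_mbps = baseline_mbps * 100   # the hypothetical 100x pipe
    title_gigabytes = 4                  # a DVD-sized movie or game
    title_megabits = title_gigabytes * 8 * 1000
    print(title_megabits / boosted_mbps, "seconds")  # ~64 seconds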

All of which buys you crowds, just indirectly. :)

5.

One can have the best broadband residential service available, and still suffer from inadequate game servers and databases. WoW has been having that problem as of late, to the great frustration of many long-time players.

6.

Assuming that the bandwidth at the server (and the server capacity) increase, and all other options remain equal, I'd say that your estimate is valid.

The server load will increase to a degree, but while the server might be tracking the movements of 600, it might be communicating the changes of only the 60 within your visible range. Extending the visible range so it's communicating all 600 will be a communications increase, but much less of a processing-load increase, since all 600 were being tracked anyway. The only added load is that the number in "visible range" increases.

Heck, if bandwidth were so broad that ALL user locations for a zone could be communicated ALL the time, the server would have one less task to do (filtering each communication for visible range). It could just stream all the data. NOTE: that's not always a good thing, as giving the client extra info just increases the opportunity for client-side hacks... and increases the client's burden.
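A minimal sketch of the visible-range cull being described (the flat 2-D distance test and the names are mine, not any particular engine's):

    import math

    VISIBLE_RANGE = 50.0  # arbitrary world units

    def updates_for(observer, players):
        """Return the player states this one client actually needs this tick.

        With limited bandwidth the server performs this cull for every client;
        with a very fat pipe it could skip it and stream everything, at the
        cost of handing each client information it probably shouldn't have.
        """
        ox, oy = observer["pos"]
        visible = []
        for other in players:
            if other is observer:
                continue
            px, py = other["pos"]
            if math.hypot(px - ox, py - oy) <= VISIBLE_RANGE:
                visible.append(other)
        return visible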

More than crowds, more bandwidth COULD mean more interactive environments, as state changes for objects (damage to structures, movable objects, etc.) could be communicated more easily.

Bandwidth boosts COULD also lead to the development of more peer-to-peer systems (with all the hack problems of any client-side management).

--
Another line of thought: oftentimes I read how MMOs must be forgiving of position-dependent powers due to lag concerns -- for a backstab to work, you must be close behind your prey, but facing changes happen so fast that the prey's facing may be quite different from what you see on the screen.

Now, MMO's appear to be presenting a better illusion here all the time, but there are plenty of incidents where what APPEARS TO BE a valid move is marked as invalid, specifically because of the mapping differences.

So, a broadening of bandwidth could mean more frequent updates to the client, allowing for more accurate positioning and more effectively-applied position-based powersets.
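To make that last point concrete: the server-side check for a power like backstab typically carries a tolerance sized to how stale its position and facing data may be, and fresher updates would let that tolerance shrink. A sketch with made-up numbers and names, not any shipping game's rule:

    import math

    def backstab_valid(attacker_pos, target_pos, target_facing_deg,
                       max_range=2.0, facing_tolerance_deg=60.0):
        """Accept the backstab if the attacker is close behind the target.

        facing_tolerance_deg is the fudge factor that absorbs lag; with more
        frequent client updates the server could afford to tighten it.
        """
        dx = attacker_pos[0] - target_pos[0]
        dy = attacker_pos[1] - target_pos[1]
        if math.hypot(dx, dy) > max_range:
            return False
        # Direction from target to attacker, relative to the target's facing.
        angle_to_attacker = math.degrees(math.atan2(dy, dx))
        offset = (angle_to_attacker - (target_facing_deg + 180.0)) % 360.0
        if offset > 180.0:
            offset -= 360.0
        return abs(offset) <= facing_tolerance_deg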

7.

I'd agree bandwidth is a limiting factor; however, I'd guess that the service provider is the one who is limiting bandwidth anyway...

8.

Actually,

An increase in bandwidth doesn't give you any inherent ability to handle more users. Systems and networks are built as a whole, and the upgrade of a single component won't necessarily buy you any more capacity if that's not the issue at hand. The real challenge is handling more traffic flow and large numbers of concurrent connections. Just because you now have 100x the bandwidth doesn't mean that your database connections can handle the extra load, nor your routers, nor your authentication servers (Blizzard, I'm looking in your general direction....)

How is that going to affect online games?

More bandwidth to the end user really only implies the ability to transfer more information, making it faster to push new content to the user. It might also allow for more dynamic content in the form of minor zone changes that might have been prohibitive with a large number of low bandwidth users.

9.

I doubt that increasing client-side bandwidth is going to have a large effect on issues such as lag and crowd sizes. Most of the network traffic associated with things like player or MOB location is likely to be small UDP packets. This traffic is unlikely to amount to any significant percentage of a client's network bandwidth.

10.

With very big groups, routing becomes an issue as well as bandwidth. An approach often taken by academic networked virtual environment systems like MASSIVE-3 is to map groups of mutually aware players onto network multicast groups. Multicast pushes the routing problem into dedicated hardware across the network and means that information only needs to be sent to each network once: if you have two people seeing the same things in the same physical house, each packet is only sent to the house once. Unfortunately, multicast isn't widely available in the wild, so most MMO servers end up sending individual unicast streams to every player, even if they are in the same house and seeing exactly the same things. You could build a software multicast infrastructure by locating servers at each ISP and only sending information to each ISP server once, but the extra servers would increase latency and be an extra pile of hassle for MMO operators.
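A toy illustration of the difference Jim is describing: unicast sends one copy per interested player, multicast one copy per group, with the network doing the fan-out. (The group_of function here is just a placeholder for however players get mapped onto groups.)

    from collections import defaultdict

    def unicast_sends(players, packet):
        # One copy of the packet per interested player, however co-located they are.
        return [(p["addr"], packet) for p in players]

    def multicast_sends(players, packet, group_of):
        # One copy per multicast group; routers replicate it to the members.
        groups = defaultdict(list)
        for p in players:
            groups[group_of(p)].append(p)
        return [(group_addr, packet) for group_addr in groups]

    # e.g. group_of might map everyone who can see the same region (or, in
    # Jim's example, the two people in the same physical house) to one group.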

11.

Increasing bandwidth sounds to me like increasing graphics quality: you improve the ability to provide a better experience until a certain threshold, and then it stops being interesting.

Whether that ability will come in the form of "more players per server" or "better service per player" is debatable.

Oh yeah, and in the spirit of "IANAL", IANA System Administrator.

12.

I think Chas York put his finger on the fly in the ointment (if you'll excuse the unpleasantly mixed metaphor) when he pointed out that "giving the client extra info just increases the opportunity for client-side hacks". Once the client has enough information to render the player's immediate surroundings, anything beyond that is counterproductive.

Adding more people to a virtual world is purely a server-side issue. You could probably increase the count by giving the server more bandwidth, but we're talking about home bandwidth here, not server-side. The client only sees the avatars within a short distance of the player's location, a small crowd at most, and already has enough bandwidth to handle those. The fact that many modern MMO games can be played perfectly well over a 56K modem is further proof of that.

The big problem with crowds is graphics bandwidth, not network bandwidth. There seems to be a common belief that graphics hardware is now about as good as it needs to get for games, and any further development will be only incremental -- slightly prettier shaders. Wrong. Modern games that appear to have near-perfect graphics, the likes of Half-Life 2 and Guild Wars, are dealing in carefully engineered illusions. HL2 gets away with its superb character modelling by severely restricting the number of human beings on screen (never more than about four at a time). GW cheats by switching to low-rez models when you're in a crowded town (leading to frequent complaints on GW forums from players who saved up to buy highly prettified uber-armour and then found they couldn't show it off in town). Displaying hundreds of people on screen at once, all in the kind of detail we've come to associate with modern games, is still way beyond the capabilities of current high-end PC hardware.

Michael Chui wrote: "Increasing bandwidth sounds to me like increasing graphics quality: you improve the ability to provide a better experience until a certain threshold, and then it stops being interesting." I agree, though it's widely believed that we've reached that point with graphics but still have a way to go with bandwidth. I'm pretty confident that (at least as far as MMORPGs go) both of those beliefs are wrong.

Bottom line: I think Edward is wrong. MMORPGs already have more client-side bandwidth than they need; any further increase, even one as drastic as a factor of 100, will make essentially no difference.

13.

Increasing bandwidth is not the same thing as decreasing latency. It seems to me that many VWs have artificially constrained interaction to disguise latency, and more bandwidth won't change that.

14.

sqrt(100X) = 10*sqrt(X)

15.

The proposition (algebraic correction applied) is reasonable ceteris paribus, with the assumptions that:

* you are talking about consumer bandwidth/cost;

* client bandwidth is related to the economic utility a consumer derives from playing a game;

* you are ignoring competing consumer applications, as mentioned above, which may reduce effective bandwidth available to games;

* bandwidth supply discontinuities (the step function) are irrelevant to your analysis.

I believe at least some of these assumptions are likely false -- mostly #2. Client bandwidth is a requisite for the consumer experience, but it is hardly sufficient in and of itself. The implication is that state-of-the-art software and design will make use of this extra bandwidth in a manner which ultimately drives more demand. Judging from the "troubles with tribbles" topic, this is an extremely liberal assertion. Game developers need to figure out transaction integrity, distributed state, and multithreading -- to mention a few issues -- first.

16.

Ross Smith writes that "MMORPGs already have more client-side bandwidth than they need; any further increase, even one as drastic as a factor of 100, will make essentially no difference."

This is arguably true for stock MMORPG designs. Stereotypically, all the bulky data is at the client, and you're only transmitting object position updates and basic descriptions of characters. (I've read some nice articles about compressing those datastreams, but if I recall correctly that was to fit over modems.)

But in my experience Second Life suffers quite a bit from bandwidth problems because the world model and textures aren't at the client - they're being streamed from the servers. If I teleport or fly at top speed into a new area, it can take more than half a minute for all the geometry to appear, one CSG object at a time. If they had 100 fold more bandwidth, you wouldn't have nearly the lag. (Presumably server capacity would have to increase a lot for building complexity to grow big enough to offset the bandwidth gain.)

So this gives us another way to look at the comments about lack of patching delay: we could see architectures shift towards less data being stored at the client, instead being dynamically streamed from the server.

Given how slowly disk access speeds are increasing, we might even get faster gameplay bringing models and textures over the network: zoning in AO or EQ2 is annoyingly slow, even though it's mostly loading data from disk (?).
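If the architecture does tilt that way, the client's main job becomes deciding what to request first. A rough sketch of a nearest-first fetch queue (the priority rule and the field names are mine):

    import math

    def build_fetch_queue(avatar_pos, objects):
        """Order not-yet-cached objects nearest-first.

        A fatter pipe drains the queue faster; the ordering is what keeps the
        nearby world from appearing one distant object at a time.
        """
        pending = [o for o in objects if not o.get("cached")]
        pending.sort(key=lambda o: math.dist(avatar_pos, o["pos"]))
        return [o["id"] for o in pending]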

17.

Tom Hudson said> So this gives us another way to look at the comments about lack of patching delay: we could see architectures shift towards less data being stored at the client, instead being dynamically streamed from the server.

Preach, Tom! You've hit the nail on the head. That's the big pay-off.

Historically -- and I'm talking about everything from the railroads to fresh water, here -- there is no such thing as "extra" bandwidth, "bad" bandwidth, "too much" bandwidth, etc. There's only bandwidth that hasn't been properly exploited... yet.

With fat pipe coming into everyone's home, at 100X current cable-modem speeds... just about all the processing could be handled at the server side. Now, before you get all up on me with "That sucks now! Server lag is already a huge issue!" -- stop and think for a sec. The reason it's an issue is the volume of control code being passed *back and forth* between the server and the client. You're running the game software on your PC and then pumping all the results of that through the current cable, to be processed at the other end by the server, bumped up against data being collected from other people's clients and fed to the server, etc. -- times the number of people playing.

With really, really fat pipe, you can actually *play on the server.* The question I have, is 100X actually fast enough for that? Is a 100X cable modem speed about as fast as my keyboard and mouse pass instructions to my CPU? I don't know. I'm not that much of a tech-head anymore. I haven't installed a hard-drive since 1994...

At the point when you can pass direct program control -- instead of data swapping -- through your pipe, then you can support scads more people "happening" at the same time. Because, right now, the server lag isn't "caused" by there being too small a pipe... but by there being too *large* a pipe. There are too many people all swamping the server with fat amounts of requests to update their own versions of the program. When there's only one copy of the program running -- on the server -- and the only thing passing back and forth is graphics (which, at 100X, is a breeze) and game control (which, as I asked... I'm not sure 100X is fast enough for), the server should be much less stressed out.

Unless I totally have my head up my barrel. Which has happened before once or twice.

18.

Firstly, the technology for greater common access speeds delivered over the "last mile" has been around for over 15 years; it is only now becoming cheap, reliable, and well established. This only increases the lower bound of common access.

It says nothing about aggregate pipes for data center access and backbones, which are increasing on a separate timescale. Such disparities between common and aggregate timescales will most likely usher in a new age of QOS rate scheduling for the average consumer.

It further says nothing about the demands for increased server technologies required to effectively use the bandwidth, which, again, are increasing on yet another separate timescale.

Another consideration is that such bandwidth to the home will itself be an aggregate, as televisions, stereos, telephones, computers, game consoles, and home/health/security monitoring devices, among others, compete on a single high-bandwidth access medium.

Peer-to-peer communication may not be greatly affected, as I consider such increases in bandwidth to the home to be strongly asymmetric (the average ingress rate limit of the home being much greater than the average egress rate limit).

Without massive increases in transmission speeds on aggregate lines for backbones and datacenters, custom on-demand multicasting -- through protocols such as multi-protocol label switching (MPLS), with label associations across autonomous-network boundaries -- will create a hierarchy of service-based autonomous-system layering, which can allow for efficient transmission of on-demand, subscriber-based broadcast services for television, radio, and gaming. Keep in mind, technologies like MPLS are what will allow telecommunication companies to roll out their QOS rate strategies to segment services like VOIP, cable television, and Internet radio in the first place.

Given that, I imagine the effects on small virtual worlds will be significant. Small-map and small-zone (or instanced) worlds -- games like Tribes, TeamFortress, and S.O.C.O.M., perhaps even DDO, among others -- can improve greatly, as those worlds work efficiently with multicasting solutions. Large-zone worlds are inefficient for multicasting (efficient for the server, inefficient for the client). In this sense, team-based multiplayer games and instanced worlds will probably be better suited to the home bandwidth gains. What will the changes be? Reduced lag, better prediction of coordinates for remotely defined objects, greater effect detail in graphics and sound, and increases in high-detail user-defined sound/skin palettes and plug-in mods.

19.

Actually, the size of the pipe to the home is UTTERLY IRRELEVANT. Except maybe that you get updates more quickly.

Look, an MMO provider pays for bandwidth. If you've got smart negotiators, you can probably get your cost per gig down into the 30 cent range. Game players in general, and MMO players in particular, are notorious bandwidth hogs, both because they play for hours on end, and because MMOs require transmission of a fair bit of data (particularly if you're in a busy area) during those hours. Typically, the bandwidth a player consumes -- all by itself -- costs an MMO provider 20+% of the revenue they reap from the monthly subscription.
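To put rough numbers behind that 20+% figure (the subscription price and transfer volume below are my assumptions, not Greg's):

    # Illustrative figures only; just the per-gig price comes from the comment above.
    cost_per_gb = 0.30            # dollars, the negotiated rate mentioned
    monthly_subscription = 15.00  # assumed typical MMO subscription
    gb_per_player_month = 10      # assumed transfer volume for a heavy player
    bandwidth_cost = cost_per_gb * gb_per_player_month
    print(bandwidth_cost, bandwidth_cost / monthly_subscription)  # $3.00, i.e. 20% of revenue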

So let us suppose that the consumer can now consume five times as much bandwidth. Uh... Do I want to let him?

Five years ago, Gordon Walton said (at a conference in the UK) that SOE purposefully metered bandwidth to consumers so that even those on a broadband connection weren't getting data any faster than those on a 56K dialup connection -- in order to limit their bandwidth costs. Bandwidth costs have come down, and I imagine those limits have been relaxed.

But controlling their costs is still going to be a major consideration for MMO providers. And so long as they can provide an adequate user experience to players by throttling down the data transmitted per time, they will continue to do so.

From my perspective, this is a non-issue anyway. We had perfectly okay online games in a 9600 baud environment--we just had to be clever about how to use that. A factor of ten improvement in bandwidth to the end-user may make Blockbuster go away as everyone gets video on demand--but it's not going to make that much of a difference for games.

20.

Hi all. Hope all are well as it has been a while.

After reading the above comments, I have to say that you guys have covered most of the points I had.

I'd like to read more of your comments on what a 100-fold increase in last-mile bandwidth can enable.

Jim Purbrick made a point about multicast as a method to reduce the bandwidth bottleneck and cost on the server side. Others raised the effects of P2P and the other related tech improvements needed.

I think new technologies and techniques will be developed to take advantage of the bandwidth to increase number of concurrent agents tracked, the richness of information multicasted, and the breadth of p2p communication and information peer-casted.

For example:
1. Voice chat is likely to be highly leveraged by new MMO designs.
2. In group play, the designated leader's machine can act as a supernode for group mates, enabling much better hosted online D&D sessions and other cool gameplay (like more exotic tabletop roleplaying paradigms).
3. High levels of convergence and application of Web 2.0 (like an MMO based on Google Maps where there is only one instance of the game with tens of millions of players online at the same time).

They're all pie in the sky thoughts, but a 10x increase in number of concurrent active agents tracked sounds reasonable if not conservative (disclaimer: IANA Programmer).

Frank

21.

@Frank: I agree with you, but, without taking the thread too far off track, I was wondering how likely you think #1 is. To me, the community/liability/moderating headaches associated with integrated VoIP make it extremely difficult to implement very broadly, despite reasonable technology improvements. For example, I just don't see how you can have language filters in VoIP in any meaningful sense for some seriously long period to come.

Thanks

22.

Andy Havens agreed with me, but I want to disagree with nearly all of his conclusions.

You most definitely don't want to run the simulation on the server. That means that ALL reaction to user input happens there, on the far side of whatever network latency you have.

This is where I think Second Life has the design wrong (as I understand their design, which isn't much), instead of merely waiting-for-technology-to-catch-up, or maybe waiting-for-cleverer-algorithms. No matter how clever they get, as long as their server farm is in San Francisco I'll have about 70 ms of network latency in every response to every action I take; if they tried to be distributed, and have some servers close to me, it'd just mean that anybody else in those plots of land would see a corresponding increase in the latency. I'm not aware of great human factors studies in this area, but the ones I think extrapolate best say that users will start noticing when total latency is over 30 ms.

The military (NPSNET is the case I'm most familiar with; Mike Zyda, who did a lot of that when at NPS, is now down at USC with the funds to start up a gaming/simulation program) has done great work on increasing the amount of simulation that can be done at client systems, and doing clever adjustments to keep everything in sync. So vehicles report their position, velocity, acceleration, and then don't have to send out another update until their position diverges beyond some error bound from what the other clients would have predicted; at that point, the other clients can do some fancy spline interpolation to fix the error up without appearing wrong. Once you have that interpolation code working, another 30 or 50 ms of latency doesn't kill the realism of the motion paths you see.
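That scheme is usually called dead reckoning. A stripped-down sketch of the sending side (the threshold and structure are illustrative, not NPSNET's actual code); receivers would run the matching extrapolation plus the spline smoothing mentioned above:

    ERROR_BOUND = 0.5  # world units of drift allowed between truth and the prediction

    class DeadReckonedEntity:
        def __init__(self, pos, vel):
            self.pos, self.vel = list(pos), list(vel)
            self.sent_pos, self.sent_vel = list(pos), list(vel)
            self.since_send = 0.0

        def predicted(self):
            # What every remote client is extrapolating from the last update.
            return [p + v * self.since_send
                    for p, v in zip(self.sent_pos, self.sent_vel)]

        def step(self, dt, send_update):
            self.since_send += dt
            self.pos = [p + v * dt for p, v in zip(self.pos, self.vel)]
            drift = sum((a - b) ** 2
                        for a, b in zip(self.pos, self.predicted())) ** 0.5
            if drift > ERROR_BOUND:
                send_update(self.pos, self.vel)   # only now does a packet go out
                self.sent_pos, self.sent_vel = list(self.pos), list(self.vel)
                self.since_send = 0.0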

I believe some of the FPS developers have had to pay attention to that, too, since different players will have different impressions of their relative positions at a point in time and thus different ideas of whether a shot should hit.

Andy also wrote: "Because, right now, the server lag isn't "caused" by there being too small a pipe... but by there being too *large* a pipe."

The lower bound to server lag isn't pipe diameter at all, it's pipe length. And the lower bound timing of that depends on the speed of light, which I don't expect us to improve upon any time soon. "lag" is an annoying word, since it conflates various underlying problems; removing bandwidth-related "server lag" ought to fix streaming world updates, but all the other components of latency will still be there, enough to kill the idea of extending your feedback loops across the network.

So I'd argue that the smart way to do things is to do as much simulation as you can on the client side, because there isn't network latency adding to the staleness of those computations. Keep it updated at some lower rate with a world description from the server that isn't zeroth-order, so that you can compensate for latency.

Disclaimer: I did my dissertation on this, so I'm really biased.

23.

Wow, someone introduces a two-order-of-magnitude change to the system, and everyone's reaction is "well, the current system won't work any better because of it." That's because the current system is going to get broken when the change happens! This is exactly the kind of change discussed in Clay Christensen's Innovator's Dilemma. When such a change happens, what we know is that people >will< take advantage of it, and it is least likely to be the current generation of online world providers that figure it out first.

I'll give an example of a way I would think about using 100x bandwidth to change the online gaming world. First, I'd Napsterize the game space. While there would be a central authentication and geography map, the central registry would never be used to move serious bits back to players' desktops. Instead, I would move the world down to many, many users' machines, having them host local servers for a given geography. I'd measure latency between any player and these servers to figure out which neighborhood they fit in, and then they would connect to that server to play the game. This of course implies that the world would be instanced by players' real-world locations by default. We can discuss the benefits (realism) vs. costs (hey, why are we de-virtualizing the game space).
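The "measure latency, join the nearest neighborhood" step might look something like this (measure_rtt is a stand-in for whatever ping mechanism the registry and clients would actually use):

    def pick_neighborhood(candidate_hosts, measure_rtt, samples=5):
        """Join the player-hosted server with the lowest median round-trip time.

        candidate_hosts: addresses handed out by the central registry.
        measure_rtt: hypothetical callable returning one RTT sample (ms) for a host.
        """
        def median_rtt(host):
            rtts = sorted(measure_rtt(host) for _ in range(samples))
            return rtts[len(rtts) // 2]
        return min(candidate_hosts, key=median_rtt)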

On top of this, however, I would create a method for aspiring dungeon masters to not just host pieces of the pre-made world as defined by the world developer, but to create their own worlds. Since I've already taken the massive infrastructure migration hit of moving to users' desktops, with all of the inherent security and authentication problems, moving to user-created content is a much smaller problem. The reason I think the combo would be a killer to WoW, EQ, etc. is that end-game players (myself included) spend most of our time waiting for the feeble amount of new content provided by the vendor. It is kind of a sad thing that your guild's reputation is often built by being first to complete an instance or fastest to complete an instance. If virtual worlds should be infinite, then the content has to be infinite too. That's not going to happen unless users become the creators. In a world with 100x bandwidth, we could begin the migration to a truly distributed and infinite gamespace.

24.

John Danner hit the nail on the head. The problem with most of these types of discussions is that nearly everyone assumes a linear continuation of current trends, with the same architecture, only "more of".

This type of blinders-on futurism has historically failed for two reasons:

1) Technological progress accelerates, at an accelerating rate of acceleration. It only appears linear if you squinch your face right up to the monitor so your eyes can only see an historically insignificant chunk. Curves appear straight if you look at a short enough segment, and even when a technology is on the verge of crossing the elbow of the hockey-stick, most practitioners simply assume the current line will continue forever.

2) Progress only seems seamless and continuous when viewed from a distance. Close up, as has been exhaustively documented over many, many technologies, technological progress tends to be marked by a series of S-curves, with dramatic, disruptive discontinuities forming leaps between them.

It is not hard, in my opinion, to predict what the next technological discontinuity will be in our industry, in broad terms, because it merely follows on the same shift that is happening in all communication technologies; the shift will be to more peer-to-peer and less client-server architecture; this will allow us to exploit the dramatic rise in bandwidth--and will force conceptual changes in game design.

Current designs depend on never trusting the client, making P2P MMOs impractical. In a world of rapidly expanding bandwidth, this is far too limiting a constraint.

New game designs will emerge that exploit, rather than limit, growing bandwidth, and new technological architectures will emerge to support them (or, the architectures may emerge first, to be exploited by new game designs. It doesn't really matter which comes first, both *will* happen).

25.

The server side already has gigabit networks. If the client side increases 100x, but the server doesn't improve to terabit, I'm not sure you'll get an increase in shard size. You might actually end up with a decrease in shard size, as fewer clients will be able to saturate the servers.

Which applications drive the 100x bandwidth increase? Is it on-demand video? That doesn't need any upstream channel and latency can be several seconds without any problem. On-line gaming might actually be worse off with that kind of app sharing the net.

If we get 100x bandwidth designed for video conferencing though that could change the way people play. It opens up high def audio channels as the basic way of communicating which enables games to be played on a console by non-typists. Throw in some voice modulating stuff so you can sound like an Orc and that could be really nifty.

26.

For all the reasons folks have given above, consumer bandwidth is highly unlikely to increase the number of characters which can interact in a virtual space. But what will bandwidth do?

Bandwidth allows digital distribution of content. This means you will see more games with AA or perhaps even AAA production values being distributed online, or at the very least, for mature AAA games (WoW & etc.), you will see more downloadable trials. Retail space is one of the tyrants of the game industry -- the fact that you *must* sell X units per store per week to keep your game on the shelves, and the shelves drive a gigantic number of all purchases, mandates making certain business model and design decisions. You have to be the big boy on the block with mass-market appeal for your business model to make sense. Digital distribution allows you to target the Long Tail of the online games space -- if you can have a team of 5 make a game that can be enjoyed by 10,000 people, you have a viable business opportunity (that game does not scale to retail, but if it can be downloaded *it doesn't have to*). Incidentally, the prohibitive cost of developing AAA content will, in the short term, mean that most downloaded games continue to be closer to Puzzle Pirates than to WoW, but Puzzle Pirates is both a very polished title and would be impossible in a non-broadband environment (you can't expect your customers to download 10 MB over a 14.4kbps connection... but at 56k that's a reasonable time to wait for content you really want to see, and at broadband it's an impulse download).

Even for AAA titles, which will need to have a box on store shelves in the short term, digital distribution allows you to milk a larger profit on "box" sales from a particular segment of your client base (say, early adopters or ones with lots of brand loyalty). Best Buy will probably not carry your game if your download competes with it on price, but they might well overlook a simultaneous digital release (with preloading) and retail release, which essentially gives your digital version a 1-2 day window over the retail channel. Given that the take per customer on a digital download is over double what you clear for a retail sale...

Broadband has worked out very well for the Galactic Civilizations folks, who saw digital distribution orders for Galciv II (which is an AAA title in the strategy game field -- FMV, very polished interface, the works) at 10 times what comparable sales for Galciv I were. That's partially a function of a rabid pre-existing fanbase from Galciv I and partially a function of a 50+ MB download being trivial for their core customers nowadays. (I was mildly irked at downloading because I was getting *only* 1 MB/s from their Asian mirror when my internet connection is rated at 50.) They've got an interesting take on the business case for having side-by-side retail and digital distribution.

27.

Bandwidth doesn't buy crowds. It accommodates crowds.

28.

I just want to say the idea of voice modulating everyone into game-world appropriate themes (Orcish) might actually make the whole idea of ingame voice chat palatable.

I can't imagine that speed-of-light-level latency would be a hindrance at all to any sort of interaction. I'm not sure how to map the average human response time of 200ms onto network latency, but from personal experience, sub-20ms latency is near-instantaneous feedback as far as the user experience is concerned.

It seems as though there is still a lot more benefit to offloading things to the client; with faster bandwidth, is it conceivable that you could keep a client in the dark about what effect its computations have on the game world, and still derive a performance benefit? Again, this still hinges on the latency issue rather than bandwidth... From my perspective, it doesn't appear to matter how much bandwidth you have if it's not acceptably responsive.

29.

P2P MMO games would be able to scale much more easily than the current ones, but the scope for cheating must be controlled lest it blow up in our faces. This turns out to be really hard to do. The best line of thinking I've come up with so far is some sort of cryptographic affirmation/witnessing scheme, where you simulate the same chunk of the world on various different systems, suitably anonymised against one another to discourage collaboration, and have them cross-check one another.
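One concrete form the cross-checking could take: each (anonymised) witness simulates the chunk, hashes a canonical serialisation of the resulting state, and the majority digest wins. A sketch that glosses over the genuinely hard parts -- determinism, witness selection, and what to do about dissenters:

    import hashlib, json
    from collections import Counter

    def state_digest(chunk_state):
        # Canonical serialisation so identical simulated states hash identically.
        blob = json.dumps(chunk_state, sort_keys=True).encode("utf-8")
        return hashlib.sha256(blob).hexdigest()

    def adjudicate(witness_states):
        """Return the majority digest and the witnesses who disagreed with it."""
        digests = {w: state_digest(s) for w, s in witness_states.items()}
        majority, _ = Counter(digests.values()).most_common(1)[0]
        dissenters = [w for w, d in digests.items() if d != majority]
        return majority, dissenters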

30.

Peter, not all solutions are technological. The problem can be tackled as a design and social architecture problem, too, a matter of looking at an old problem in a new way.

By way of analogy, open source companies don't have to worry about IP theft.

One way of dealing with the problem of cheating on the client is to create game designs where there is no benefit to the cheating client.

Just off the top of my head, other solutions might be found via an absurd lateral thinking experiment: what if all clients were allowed to cheat all the time--or were even *required* to cheat all the time?

(The purpose of the thought experiment is not to pursue absurd solutions, it is to find original ways of dealing with the problem and to escape the thinking-trap of the escalating arms race with cheaters.)

Clearly, in a P2P architecture, continuing to assume that "the client is the enemy" is a counter-productive strategy.

31.

I guess I need to be counted among those who imagine a two-orders-of-magnitude increase in pipe size not really doing much to increase the number of players in the same area on a shard.

But could this have other, even bigger effects? I think so.

Assuming for the moment that game servers will contain 100x more useful data than they do now, and that their processing capabilities will also increase, one area of impact for a bigger pipe could be a dramatic increase in the "world" aspect of game worlds.

Having more bandwidth for sending world effects information could allow more world effects to be modeled. Even if dynamic data is server-bound, a lot more static data could be sent over a bigger pipe. Instead of just sending relatively simple data on terrain, vegetation, and static objects, each of these things could become much more complex, and other types of static media describing aspects of the game environment could be added to create a much denser/richer experience.

Another use for a bigger pipe might be a little more radical. In practice (currently), the amount of server-to-client data is much greater than that sent from client to server. Having a bigger pipe might change that -- what if the reason for the disparity turned out to be not enough bandwidth?

What if the bigger pipe was the key to that old will-o-the-wisp, Virtual Reality?

Perhaps a 100x bigger pipe would allow consumers to transmit high-quality video and audio from their home in real time to a server to other consumers. In addition to other uses for this capability (*cough*), wouldn't some online game developer find a use for it?

Would some version of Live-Action Role Playing be possible using this technology? How about if processor capabilities improve as well, so that raw image information can be modified on the fly -- a large, middle-aged white male waving a baton could be digitally altered into a young female Dark Elf wielding a Staff of Striking.

More ones and zeros means more realistic details of the audio and visual components of reality can be transmitted. I like to hope this means that the world-y aspects of MMOGs will catch up to the gameplay, but then I'm one of those odd Explorer-types. :-)

--Bart

32.

Greg mentions that the hosting cost is about 30 cents per gig. I think you can get it just a bit lower than that, but I believe he is right in saying that what really matters is how much it costs the people hosting the servers, not the bandwidth to the client. Additionally, since considering nearby neighbors in an MMOG is n^2, you'll also have rising CPU usage. This could mean that it costs more to run a game that tries to take advantage of large crowds.

As others have mentioned, if you have more players on the screen, you will also (probably) use more textures, draw more polygons, play more sounds, render more light sources, shadows, etc. This would be another major obstacle.

Peer to peer is a very interesting idea, but it's still in its infancy. You can use it to download fairly static content (map data, textures, etc.), but not to get player location data or combat resolution data until you resolve the client trust issue... which sounds like a tough problem. Also, I'm not sure we're talking about a 100x improvement in upstream as well. Most likely, it's mostly downstream.

Multicast is interesting, but it currently suffers from routing and reliability issues. So you'd have to deal with dropped packets somehow, perhaps with redundant data or some other error correction, or out-of-band data requests.
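The standard patch for lossy multicast is sequence numbering plus out-of-band repair: the client notices a gap in the stream and asks the server directly (unicast) for the missing updates, or for a fresher state that supersedes them. A rough sketch:

    class GapDetector:
        def __init__(self, request_repair):
            self.expected = 0
            self.request_repair = request_repair  # unicast callback to the server

        def on_packet(self, seq, payload):
            if seq > self.expected:
                # One or more multicast packets were dropped; fetch them directly.
                self.request_repair(list(range(self.expected, seq)))
            self.expected = max(self.expected, seq + 1)
            return payload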

I personally think more processing power would be far more interesting for MMOGs... server-side physics and cycles for deeper mob AI. Maybe someone needs to start a Mob-AI-in-a-chip company.

33.

Chris York posted:
"So, a broadening of bandwidth could mean more frequent updates to the client, allowing for more accurate positioning and more effectively-applied position-based powersets."


This is not true. A broader pipe does nothing for latency, which is the deciding factor in keeping the simulation on the client and server in sync for position-based attacks. The reason people claim cable is better than 56k is that there are real improvements to latency on cable networks. Most of these networks went from 100ms 56k latency to single-digit latency. The problem is that going from cable today to 100x cable speeds tomorrow won't give you a similar decrease in latency between nodes. The lag monster will still exist, and 30 - 100ms of total system latency will still need to be anticipated. Even if the broadband providers can get latency down to <1ms on the Internet, there are intermediate routers, motherboard bus latency, OS latency, server latency and a host of other things that would need to change to get total system latency down beneath 1ms. Only then will more frequent updates mean anything to the client in real terms.

Even on a closed network, such as those that manage clustering for servers, managing <1ms latency is difficult. The Internet complicates the entire process because it replaces that closed network with an open one that was not designed for timely information delivery; it was designed for reliable information delivery (at some point in time).

34.

I remember an excellent article by Jessica Mulligan (on Skotos, I think) regarding the problems with increasing bandwidth as compared to decreasing latency. That said, there is no harm in making avatars a bit more long-sighted, and being able to push the fogging range a bit further out...

Endie
