
Feb 25, 2006

Comments

1.

Wow, I can tell you right now that Grady's ball-park estimate of $100/SLOC clearly does not apply to all projects. Achaea alone has over 600k lines of code and we definitely did not spend 60 million dollars developing it.

--matt

2.

Matt> Wow, I can tell you right now that Grady's ball-park estimate of $100/SLOC clearly does not apply to all projects. Achaea alone has over 600k lines of code and we definitely did not spend 60 million dollars developing it.

I think Grady's estimate is based upon Theoretical SLOC (TSLOC?). In practice, we make heavy use of various design patterns (implicitly or explicitly), which expand the TSLOC into many SLOCs. I propose that the gap between theoretical/academic methodology--Grady's field of expertise--and design/development technique is quite large and, unfortunately, probably growing.

3.

I'm interested in what Booch has to say, and in aspect-oriented programming (at least as a set of programming patterns). However, I think the gap that Randolfe mentions was almost unbridgeably wide the first time I encountered Booch, in about 1987 -- when I was in a group working with C++ v1.0. The theoretical/practical gap doesn't appear to have shrunk since then.

That said, much of game development clearly lags behind other areas of software engineering in its use of robust software architecture and methodology (as opposed to hacky, expedient, patch-it-later code). So this may all be theoretical, not-very-useful stuff, or maybe there's something we can actually apply here.

4.

Matt> Achaea alone has over 600k lines of code and we definitely did not spend 60 million dollars developing it.

25 years ago, there was a maxim that the average programmer produced 60 lines of (working) code per day, whatever the language. This would mean it took 600,000 / 60 = 10,000 programmer-days to create Achaea. Is that a reasonable estimate?

Richard

5.

Richard wrote:

25 years ago, there was a maxim that the average programmer produced 60 lines of (working) code per day, whatever the language. This would mean it took 600,000 / 60 = 10,000 programmer-days to create Achaea. Is that a reasonable estimate?

No, definitely not. 10,000 programmer-days is far more than we've invested into Achaea in terms of programmer resources. We produce far more than 60 working lines of code each programmer-day, though to be fair, our tolerance for bugs is probably higher than, say, Blizzard's.

--matt

6.

On the face of it, 60 seems a bit low...

So far my rate has been 440 lines/day, but that average will go down somewhat when I get into full bug-fix mode.

However...

Having worked both in teams and alone, I've found that a single person is much more efficient than a group. My guesstimate is that a group of 10 programmers working for 1 year is only as productive as 1 programmer working for 5 years.

Plus, I know that I'm a more efficient coder than average. The effectiveness of a really good programmer compared to one on the verge of being unemployable is a factor of 10. The difference between a really good coder and an average one is 2x or 3x.

The probability of having 10 really good coders in a large group is very, very small. The larger the group, the more "average" the coding quality tends to be.

So maybe 60/day isn't so far off for a group of 10. A group of 100 programmers on a project might be closer to 40/day.
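For what it's worth, here's that arithmetic worked through against Achaea's quoted 600k lines. This is strictly a back-of-the-envelope sketch: the rates are my guesses above, and the 250 workdays/year is just an assumption.

```cpp
#include <cstdio>

// Rough arithmetic only: Achaea's 600k lines (from comment 1 above) divided
// by the ballpark per-programmer rates guessed at above. The 250
// workdays/year figure is an assumption, not data.
int main() {
    const double total_lines = 600000.0;
    const double workdays_per_year = 250.0;

    struct Team { int size; double lines_per_day; };
    const Team teams[] = { {1, 440.0}, {10, 60.0}, {100, 40.0} };

    for (const Team& t : teams) {
        double team_rate = t.size * t.lines_per_day;  // total lines/day
        double days = total_lines / team_rate;        // calendar workdays
        std::printf("%3d programmer(s) at %.0f lines/day each: "
                    "%.0f days (~%.1f years)\n",
                    t.size, t.lines_per_day, days, days / workdays_per_year);
    }
    return 0;
}
```

Note the totals: at 60 lines/day/programmer the job is 10,000 programmer-days however you split the team (and at 40 it's 15,000), while a solo coder at 440 lines/day gets there in under 1,400.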

7.

Another fairly immutable law is that per-programmer output drops once a group grows beyond about 5 people.

100% accurate in my experience.

8.

Also, measuring LOC for programmer efficiency is a bit antiquated as an atomic metric. With object-based and object-oriented systems, a large factor in programmer efficiency is wrapped up in knowledge of the core libraries, the domain libraries, and the degree to which those libraries are well designed themselves. Further, efficiency shouldn't be measured simply at the application level, but also by the ability of designers and programmers to contribute to an evolving set of domain-specific libraries.

9.

So, if measuring LOC for programmer efficiency is out of date, would you say he Booched it? ;)

10.

SLOCs are problematic for the reasons everyone mentioned above. However, keep in mind that the cost associated with them is not just "cutting the line of code" but also all the overhead and maintenance (over the lifecycle) of that codebase. Clearly, factors such as lifespan and overhead (integration/management/testing/requirements analysis, etc.) vary by type of software project. A "mission critical" app spanning 14 contractors is in a different league than, say, a downloadable game.

BTW, take a look at slide 7 (http://www.cs.nott.ac.uk/~nem/complexity.pdf) for one taxonomy of the range of software projects.

Having said all this, and concurring with the crowd (without objective basis, however) that $100/SLOC feels high for the games industry: is it fair to say that as the industry tackles larger projects (more content) and seeks to leverage a longer-lived codebase (maintenance, IP/reuse, etc.), the overhead contribution to SLOC cost will grow?

11.

Good post, Nate.

I'd like to hear more about parallelism and its impact on development, as well as how OO can assist in simplifying the problem.

As the introduction of multi-core systems accelerates, it seems to me that parallelism will loom ever larger in architecture discussions.

12.

"leverage a longer-lived codebase" - I suspect the main problem is that games are on very tight budgets, with tight deadlines, and squeezing as many FPS in as possible. Coders know this, so they write code that doesn't take much time to write, but which isn't maintainable. (The optimizations for FPS take time, but they end up making the code less maintainable.)

Ideally, when coding, you'd like to design an infrastructure for (a) the features that need to be implemented in 1.0, (b) the features that will probably be implemented in 1.0 but which management hasn't admitted to yet, and (c) the features that management will want in 2.0 or maybe even 3.0. Anyone who thinks they're in the project for the long haul will take the time and plan for (a), (b), and (c) up front. Someone who's rushed (or inexperienced) will only deal with (a). When issues (b) and (c) come around, the code's infrastructure is hopelessly inadequate.

Multi-core - It's all about clearly defining modules (you can call them objects if you want, although OO programming has virtually nothing to do with it) and the CLEAN communication between modules.
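A minimal sketch of what I mean, with an invented Channel type and a hypothetical physics/render split (illustration only, not anyone's actual engine code):

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// The only thing the two "modules" share is this channel; neither ever
// touches the other's internal state.
class Channel {
public:
    void send(int msg) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(msg); }
        cv_.notify_one();
    }
    int receive() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        int msg = q_.front();
        q_.pop();
        return msg;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<int> q_;
};

int main() {
    Channel ch;
    // "Physics" module: produces results on one core...
    std::thread physics([&ch] {
        for (int frame = 0; frame < 3; ++frame) ch.send(frame);
        ch.send(-1);  // sentinel: no more frames
    });
    // ..."render" module: consumes them on another.
    for (int msg = ch.receive(); msg != -1; msg = ch.receive())
        std::printf("render got frame %d\n", msg);
    physics.join();
    return 0;
}
```

The point is that all the coupling funnels through one narrow, explicitly thread-safe interface, so adding cores means adding modules, not sprinkling locks all over the codebase.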

13.

Grady's income depends on you believing software is expensive and getting more expensive. We've been on the brink of a huge software crisis for 30 years... lol

The reality is that software is cheap and getting cheaper. How else can you explain companies moving features out of hardware and into software? Video games are enormously more complex and entertaining than they were 20 years ago, but I suspect that, when adjusted for inflation, they cost about the same.

14.

I should disclaim that I may be a little cynical because the companies warning us of the impending software crisis are also the ones foisting crap like Enterprise Java Beans onto us.

15.

http://www.mischiefbox.com/blog/?p=287 gives some hint as to what is going on: games are built to be thrown away. Not only that, but I understand that the games industry is also worse at handling deadlines, imposes more pressure on its developers, and has developers who are even more disproportionately young than in other parts of the software industry. Young developers like reinventing the wheel from little experience.

So I suspect a lot of the cost is in wheel reinvention, inappropriate optimisation, poor architecture and re-architecture, and the general flailing that software projects are prone to without very careful management.

16.

"I should disclaim that I may be a little cynical because the companies warning us of the impending software crisis are also the ones foisting crap like Enterprise Java Beans onto us."

I share your cynicism. I've found that it's hard enough organizing major systems around solid modularization, good development techniques, and quality, let alone ambitious abstractions like EJBs. How many times have you encountered a team (mis)using EJBs as a persistence proxy layer?

17.

(above comment was mine; forgot to log in on this computer)

18.

"ambitious abstraction" :)

I'm not sure whether to blame the producer or the consumer there -- there's probably enough to go around. Consultants did a pretty good job convincing people they needed a 100% pure Java transactional front-end to their sadly outdated database servers.

One thing that has changed a lot in 20 years is the amount of global knowledge shared between implementors. These days you just have to trust your colleagues and suppliers know what they're doing. The market eventually sorts out the bad apples, but that can be a painful process. I think it's a sign the industry is growing up, but that doesn't make me happy. (The automotive industry is all grown up too. Aren't we proud?)

19.

My somewhat jaded take is that most of this is old territory. So I'm always surprised when the same people who said in 2000 that multi-processing-enabled middle-ware (Intrinsic Alchemy, in this case) was way too complicated for making games are now banging their heads and hashing out much more complicated solutions.

On abstractions, the key test for me isn't 100% coverage of a problem domain, because that never happens. It's simplicity, performance, and expressiveness. Minimize the number of hoops the developer must jump through to express a concept. Have the resulting machine code be small and fast.

On glue code, right now it's up to middle-ware developers to make interop easier, and there's little incentive for them. It's way too easy to wind up forced to integrate N different matrix4x4 classes, bloating code in a big factorial mess.

But standardization isn't necessarily the answer either: there are sometimes good reasons why someone can't use std::string. Alchemy tried to solve this with some extra reflection to automate gluing external modules to their middleware layer. But that didn't fly.

Personally, I blame the OSes. Glue and abstraction are what operating systems are there to do: software to hardware, hardware to hardware -- so why not software to software?

Is dynamic code linking the best we can do? Can't system-level compilers learn that Mat44 and Matrix4x4 can be API-mapped without translation? Or come up with a simple way to let three bits of middleware use the same set of runtime objects without requiring a recompile from [was it open?] source? COM-like systems just don't cut it, IMO. Putting the magic inside the transactions isn't good for performance. But other ways seem possible, and are perhaps still untried.
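To make the glue problem concrete, here's roughly the workaround we're stuck writing by hand today -- a traits layer, sketched with two hypothetical matrix types standing in for the real middleware ones. You write the glue once per type instead of once per pair, but it's still manual, and it's exactly the mapping I'd like the system to derive for me:

```cpp
#include <cstdio>

// Two hypothetical middleware matrix types with the same underlying layout,
// standing in for the "N different matrix4x4 classes" problem above.
struct Mat44     { float m[16]; };   // from middleware A (illustrative)
struct Matrix4x4 { float e[16]; };   // from middleware B (illustrative)

// A traits layer exposes each type's raw storage once, so any pair of
// traits-enabled types can interoperate through it.
template <typename M> struct MatrixTraits;   // primary template

template <> struct MatrixTraits<Mat44> {
    static const float* data(const Mat44& x) { return x.m; }
    static float*       data(Mat44& x)       { return x.m; }
};
template <> struct MatrixTraits<Matrix4x4> {
    static const float* data(const Matrix4x4& x) { return x.e; }
    static float*       data(Matrix4x4& x)       { return x.e; }
};

// Generic copy between any two traits-enabled matrix types.
template <typename Dst, typename Src>
Dst convert(const Src& src) {
    Dst dst;
    const float* in = MatrixTraits<Src>::data(src);
    float* out = MatrixTraits<Dst>::data(dst);
    for (int i = 0; i < 16; ++i) out[i] = in[i];
    return dst;
}

int main() {
    Mat44 a = {};
    a.m[0] = a.m[5] = a.m[10] = a.m[15] = 1.0f;  // identity matrix
    Matrix4x4 b = convert<Matrix4x4>(a);          // glue in one call
    std::printf("b.e[0] = %.1f\n", b.e[0]);
    return 0;
}
```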

20.

"Video games are enormously more complex and entertaining than they were 20 years ago, but I suspect that, when adjusted for inflation, they cost about the same."

Wow. Defender cost $20M in 2006 dollars?

21.

I'm not sure I trust the accounting rules of game companies, so retail unit price makes more sense to compare. Is my memory right that the Atari 2600 cost US$200 with $30 for a cartridge game? That was almost 30 years ago!

There's no structural problem with software. Companies target the same retail price they've always targeted and rising sales volumes (economy of scale) allow them to build bigger games.

Companies WANT a high barrier to entry because it reduces competition. Data is probably hard to find, but it would not surprise me if software costs are flat and the big increases are in artwork and level design. Artwork and level design are variable costs, right?
