I am archiving older pieces I have written on other sites, making this the definitive home for all my work. This is one of several I am porting over from my GameDev.Net user journal. Enjoy!
Okay, TickerWatch is done. No, I wasn't able to complete it before I left Maryland, but who really cares? It was a vanity project, and it got my point across. I sat down with the guy I guess I can call the CTO (it's a relatively small company) and outlined how it worked, then sent him the sources to both the old and new versions of TickerWatch. Have fun, buddy!
Back In New York!
Well, Long Island (if you were thinking NYC). Moving in and all that, so it's quite a scattered time. I should be updating my journal roughly daily, though.
My already non-functional laptop suffered further damage during my trip back from Maryland. I was using the restroom at the airport (I missed my flight - at the wrong airport - got placed on stand-by, and had to wait for the second flight afterwards, but it all worked out nicely) and had the laptop in a carrying case hanging from a hook on the door when it fell! It cracked a bit of the casing around the screen, but there's no other apparent damage. Time to dig out them warranty papers...
Worst-case scenario, I'll have to buy another computer. I'm debating the laptop/desktop thingie. Being so close to school and having such a packed calendar this year, a desktop makes a lot of sense - especially since there are machines on campus and I have broadband at home. However, it being my senior year, I'd rather not be saddled with the depreciating equity of a 10-month-old laptop when I graduate, since I plan to head out on the open road for quite a while (I've been considering doing the GameDev US Tour, where I have GDNet members sign up for me to interview them on camera when I pass through/near their towns. I'll keep y'all posted).
An idea that's come to mind is to simply lease a desktop instead. Pay $50 to $100 per month for the use of a nicely-equipped workstation, and return it to the lessor when I'm moving out. I'm thinking that I should be able to persuade the screwdriver shop down the road to do it, too, since they probably have parts that never get sold each sales period...
Okay, enough personal life whinging. Let's get to the projects.
Scalable Software Renderer
I've been out of serious game and graphics programming for a while, and I'd like to get back into it. Before going ahead and writing some cutting-edge, pixel- and vertex-shaded mindbender, I decided I'd write a software renderer in C# to refresh all the skills. Given that I can't even remember the details of Warnock's algorithm right now, that sounds like a good idea.
The additional rationale is to provide a fallback device for games I make in the near future, interface-compatible with my higher-end render system, so that all users who can install the .NET Framework will be able to play my game, whether they have the hardware for fancier effects or not.
The renderer is supposed to profile itself based on available processing power and automatically/dynamically scale the available functionality. That is to say, if I wrote the ability for it to do cartoon rendering (cel shading, etc.) but the hardware couldn't quite handle it, it could drop just the cel shading portion of the pipeline - all without user or programmer intervention. Using reflection, it would also expose its interfaces to the developer console, meaning that a developer could tweak it while it's running, substituting one or more effects for others to make more judicious use of available cycles (e.g. retain the cel shading, drop Phong or Gouraud, etc.).
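To make the idea concrete, here's a minimal sketch of the self-scaling pipeline (in Python for brevity; the real renderer would be C#). The stage names, costs, and the whole Pipeline API are hypothetical illustrations, not the actual design.

```python
# Sketch: a self-scaling render pipeline that sheds optional stages
# when the measured frame time exceeds a budget. All names and costs
# here are hypothetical.

class Stage:
    def __init__(self, name, cost_ms, optional=False, priority=0):
        self.name = name          # e.g. "cel_shading"
        self.cost_ms = cost_ms    # simulated per-frame cost
        self.optional = optional  # can this stage be dropped?
        self.priority = priority  # lower = dropped first

class Pipeline:
    def __init__(self, stages, budget_ms):
        self.stages = list(stages)
        self.budget_ms = budget_ms

    def frame_time(self):
        return sum(s.cost_ms for s in self.stages)

    def rescale(self):
        """Drop optional stages, lowest priority first, until we fit the budget."""
        dropped = []
        while self.frame_time() > self.budget_ms:
            candidates = [s for s in self.stages if s.optional]
            if not candidates:
                break  # nothing left to shed; we run over budget
            victim = min(candidates, key=lambda s: s.priority)
            self.stages.remove(victim)
            dropped.append(victim.name)
        return dropped

pipe = Pipeline([
    Stage("transform", 4.0),
    Stage("gouraud", 6.0, optional=True, priority=1),
    Stage("cel_shading", 10.0, optional=True, priority=0),
], budget_ms=16.0)

print(pipe.rescale())                 # ['cel_shading'] - dropped first
print([s.name for s in pipe.stages])  # ['transform', 'gouraud']
```

The developer-console tweaking described above would amount to mutating the `optional` and `priority` fields at runtime, so a developer could pin cel shading and sacrifice Gouraud instead.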
Of particular importance to me is creating a usable component. I think that the reflection possibilities of .NET, coupled with the ability to compile code more or less on the fly, yield an opportunity for truly drop-in components, where a visual designer approach can be used for integrating gaming pieces a la Windows Forms. SSR will give me a chance to test this hypothesis.
This is important as a means of combating the difficulties highlighted in this thread - namely, the preference for reinvention rooted in the marginal benefit of investing time and effort to learn an existing, often imperfect-fit "engine."
ReComputing
This is the biggie.
Those familiar with some of my more recent posts (as well as some of my older threads on Next Generation Computing) know that I've been professing dissatisfaction with the state of computing, inviting - nay, soliciting - ideas and discourse on how we can improve the situation, particularly by leveraging the opportunity of Open Source software development. Well, ReComputing is going to be my stab at the problem.
ReComputing is going to be an 11-month project, at the end of which (July 30, 2005) the sources will be opened, regardless of state/progress. The objective is to create a "graphical user environment" organized around the various tasks that the user wishes to perform (as opposed to the applications performing said tasks) and eliminating as much tedium from processes as possible.
- Filesystem-level versioning. Rollback changes, view previous versions of files, branch, merge, diff - all built into the filesystem as opposed to individual applications.
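As a toy illustration of what "versioning built into the filesystem" means (a Python sketch; every name here is hypothetical): each write becomes a new immutable version, so rollback and history come for free rather than being reimplemented per application.

```python
# Sketch: filesystem-level versioning. Every write appends a new
# immutable version; older versions stay addressable for rollback,
# history, and diffing. The API is a hypothetical illustration.

class VersionedFile:
    def __init__(self):
        self.versions = []  # each write appends; nothing is ever lost

    def write(self, data):
        self.versions.append(data)
        return len(self.versions) - 1   # version number of this write

    def read(self, version=-1):
        return self.versions[version]   # default: latest version

    def rollback(self, version):
        """Roll back by re-recording an old version as the newest one."""
        return self.write(self.versions[version])

f = VersionedFile()
f.write("draft one")
f.write("draft two")
f.rollback(0)
print(f.read())          # "draft one" again...
print(len(f.versions))   # ...with "draft two" still preserved (3 versions)
```

Branching and merging would layer on top of the same append-only history, the way version control systems do it.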
- Component/Codec architecture. Inspired by MIME, data is broadly classified into five main categories: text, image, video, audio, and application (for application-specific processing, a sort of catch-all). Underneath these headings are slots to be filled by codecs that read and write specific formats for each category, such as HTML (text/html), AVI (video/avi), OGG (audio/ogg) and so forth.
Applications (more on the definition of "application" in a second) will be built from components that handle the root type, meaning that the installation of a new format codec enables all applications handling its root type to process that format.
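A minimal sketch of that codec registry (Python for illustration; the type strings follow the post's MIME-inspired scheme, and the registry API is hypothetical): applications query by root type, so registering one new codec extends every application handling that root.

```python
# Sketch: a MIME-inspired codec registry. Applications ask for
# handlers by root type (text, image, video, audio, application);
# installing a codec for a new subtype instantly extends every
# application that handles the root. All names are hypothetical.

codecs = {}  # maps "root/subtype" -> codec object

def register(mime_type, codec):
    codecs[mime_type] = codec

def handlers_for(root):
    """All formats an application handling this root type can now process."""
    return sorted(t for t in codecs if t.startswith(root + "/"))

register("text/html", object())
register("audio/ogg", object())
print(handlers_for("text"))   # ['text/html']

# Installing a new text codec - every text application gains it:
register("text/plain", object())
print(handlers_for("text"))   # ['text/html', 'text/plain']
```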
- Document-centric processing. Or, "Death to Applications!"
If I'm editing an image, I don't really care whether it's in Adobe Photoshop(TM), CorelDRAW(TM) or Microsoft Paint; I just want to edit my image. The only reason we distinguish between these tools is the workflow they provide and their toolsets. The objective here is to provide type-specific (text, audio, video, image, etc.) "containers", for lack of a better word, into which functionality can be plugged. This would also work in aggregate documents, meaning that I could edit the text with all the features of a QuarkXPress, modify the adjacent image with all the tools of a Photoshop, and tweak the background sound at Cakewalk/Sound Forge/whatever-level professionalism - all without thinking about the "application" or switching windows if I so chose.
In essence, the workflow is to be designed to revolve around the user. I believe that it is ethical to "waste" processing cycles to make using the software more intuitive and transparent to the user, so this is a central tenet.
- No document saving or naming required. There's no reason why a user should ever save. Data can be automatically serialized - and pruned per specific parameters if storage is at a premium - all without user intervention. Of course, all of this will be configurable, so if you wish to persist in naming every little doodle and managing your memory yourself, you may. This idea is typically fairly controversial, so I figure I might as well explain in greater depth.
First, saving. All user data, even a meaningless squiggle equivalent to fingerpainting by a 36-year-old, will be saved. The user will not be prompted for a name for the data unless he has indicated that he wishes to be, and this can be configured per data type (so you can name code files but not drawings, or vice versa, or whatever).
The user can also set up a data pruning policy, which only comes into play when storage is at a premium. This pruning policy will affect data (files) as a whole as well as revision info, and will be minutely configurable for the advanced/power user.
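One possible shape for such a pruning policy, sketched in Python (the rule chosen here - unnamed files go first, least-recently-accessed first, named files are never touched - is just one hypothetical configuration):

```python
# Sketch: a pruning policy that only runs when storage is scarce.
# Unnamed scraps are dropped, least-recently-accessed first; named
# files are never pruned. Thresholds and fields are hypothetical.

def prune(files, capacity):
    """Drop unnamed files, oldest access first, until total size fits."""
    files = sorted(files, key=lambda f: f["last_access"])
    total = sum(f["size"] for f in files)
    kept = []
    for f in files:
        if total > capacity and f["name"] is None:
            total -= f["size"]   # prune this unnamed scrap
        else:
            kept.append(f)
    return kept

store = [
    {"name": None,      "size": 50, "last_access": 1},  # old doodle
    {"name": "code.cs", "size": 40, "last_access": 2},
    {"name": None,      "size": 30, "last_access": 3},  # recent scrap
]
print([f["name"] for f in prune(store, capacity=80)])  # ['code.cs', None]
```

The "minutely configurable" part would amount to letting the power user swap in different sort keys and exemption rules per data type.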
Second, naming. The primary mode of file identification will be metadata, including the type(s) of data within the file - image, audio, etc; the dates of creation and modification; frequency of access; size; the file contents, and so forth. Of particular importance is that last entry, the file contents. Particularly for text, rudimentary file analysis will "guesstimate" what data are related, allowing groupings ("Collections") to be created automatically, enhancing navigability without necessitating elaborate user naming and filing schemes.
Naming will only be required when exporting a file for other platforms and the web.
- No more directories/folders. Personally, I hate deep nesting, and I hate having to search to find files - and then finding them in some obscure location. Personal justification is not enough, though, so the rationale here is that directories tend to be broad trees and are quite intimidating to non-expert computer users. Move a file to another folder in Windows and watch as your secretary ages prematurely, muttering "But it was here last night!"
The Collection properties described above will lay out implicit folders that are navigable but also flexible in terms of content. What that means is that a file may show up in multiple Collections by sharing significant properties with other members of each Collection, allowing the same file to be addressed in many different ways (as opposed to via a single, canonical path). In addition, related Collections will be displayed based on fewer shared properties, with All Collections always being the root. Audio Files, for example, constitute an explicit Collection; Music Audio Files with metadata (ripped CDs, MP3s with tags, WMAs, AACs, etc.) constitute a hierarchy of explicit Collections that can be viewed in multiple ways - All Music, Music by Album, Music by Artist, Music by Year, Music by Decade, etc.
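The mechanism above can be sketched in a few lines of Python: each Collection is just a predicate over metadata, and a file lands in every Collection whose predicate it satisfies, so there is no single canonical path. The rules and fields shown are hypothetical examples.

```python
# Sketch: implicit Collections derived from metadata. A file appears
# in every Collection whose predicate it satisfies - there is no
# single canonical location. All rules and fields are hypothetical.

from collections import defaultdict

def build_collections(files, rules):
    groups = defaultdict(list)
    for f in files:
        for name, predicate in rules.items():
            if predicate(f):
                groups[name].append(f["title"])
    return dict(groups)

files = [
    {"title": "song.ogg", "type": "audio", "year": 1999},
    {"title": "memo.txt", "type": "text",  "year": 2004},
]
rules = {
    "Audio Files": lambda f: f["type"] == "audio",
    "Music by Decade (1990s)": lambda f: f["type"] == "audio"
                                         and 1990 <= f["year"] < 2000,
    "Text": lambda f: f["type"] == "text",
}
cols = build_collections(files, rules)
print(cols["Audio Files"])   # ['song.ogg'] - also in the 1990s Collection
```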
It has been theorized that using this approach, every piece of data is only three "pivots" away from any other piece of data. That remains to be seen in practice.
I'm starting to experience finger fatigue. Tomorrow I'll talk about the execution plan for ReComputing, including how I plan to make it significant in this 95% Windows World.