#AltDevBlog » Parallel Implementations
John Carmack describes his code-evolution approach to adding new code:
The last two times I did this, I got the software rendering code running on the new platform first, so everything could be tested out at low frame rates, then implemented the hardware accelerated version in parallel, setting things up so you could instantly switch between the two at any time. For a mobile OpenGL ES application being developed on a Windows simulator, I opened a completely separate window for the accelerated view, letting me see it simultaneously with the original software implementation. This was a very significant development win.

If the task you are working on can be expressed as a pure function that simply processes input parameters into a return structure, it is easy to switch it out for different implementations. If it is a system that maintains internal state or has multiple entry points, you have to be a bit more careful about switching it in and out. If it is a gnarly mess with lots of internal callouts to other systems to maintain parallel state changes, then you have some cleanup to do before trying a parallel implementation.

There are two general classes of parallel implementations I work with: The reference implementation, which is much smaller and simpler, but will be maintained continuously, and the experimental implementation, where you expect one version to “win” and consign the other implementation to source control in a couple weeks after you have some confidence that it is both fully functional and a real improvement.

It is completely reasonable to violate some generally good coding rules while building an experimental implementation – copy, paste, and find-replace rename is actually a good way to start. Code fearlessly on the copy, while the original remains fully functional and unmolested.

It is often tempting to shortcut this by passing in some kind of option flag to existing code, rather than enabling a full parallel implementation. It is a grey area, but I have been tending to find the extra path complexity with the flag approach often leads to messing up both versions as you work, and you usually compromise both implementations to some degree.
(via Marc)
(tags: via:marc coding john-carmack parallel development evolution lifecycle project-management)
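To make the "pure function" point concrete, here is a minimal Java sketch (not code from the article; the Scene/Frame types and renderer names are invented) of keeping a reference and an experimental implementation switchable at runtime, and comparable against each other:

```java
import java.util.function.Function;

// Sketch of the "parallel implementations" idea from the Carmack quote: when the
// work is a pure function from input parameters to a result structure, a reference
// and an experimental implementation can be swapped, or run side by side, trivially.
public class ParallelRenderDemo {

    record Scene(int width, int height) {}   // input parameters
    record Frame(int[] pixels) {}            // pure result structure

    // Reference implementation: small, simple, maintained continuously.
    static Frame renderSoftware(Scene s) {
        int[] px = new int[s.width() * s.height()];
        java.util.Arrays.fill(px, 0xFF000000);   // trivially "correct" output
        return new Frame(px);
    }

    // Experimental implementation: expected to win or be consigned to source
    // control once the reference confirms it is a real improvement.
    static Frame renderAccelerated(Scene s) {
        int[] px = new int[s.width() * s.height()];
        java.util.Arrays.fill(px, 0xFF000000);   // stand-in for the fast path
        return new Frame(px);
    }

    public static void main(String[] args) {
        // Toggle via a system property, e.g. java -DuseAccelerated=true ParallelRenderDemo
        boolean useAccelerated = Boolean.getBoolean("useAccelerated");

        // Because both sides are pure functions, switching is one assignment...
        Function<Scene, Frame> renderer =
                useAccelerated ? ParallelRenderDemo::renderAccelerated
                               : ParallelRenderDemo::renderSoftware;

        Scene scene = new Scene(320, 240);
        Frame frame = renderer.apply(scene);

        // ...and checking the experimental output against the reference is just as easy.
        Frame reference = renderSoftware(scene);
        boolean matches = java.util.Arrays.equals(frame.pixels(), reference.pixels());
        System.out.println("pixels rendered: " + frame.pixels().length
                + ", matches reference: " + matches);
    }
}
```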
5 Reasons to Use Protocol Buffers Instead of JSON For Your Next Service
A good writeup of the case for protobuf > JSON (via Marc)
(tags: via:marc api soa web-services protobuf json interop protocols marshalling)
Plumbr.eu’s reference page for java.lang.OutOfMemoryError
With examples of each possible cause of a Java OOM, and suggested workarounds. Succinct.
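As a quick illustration of one of the causes that kind of reference page covers, this small sketch (class name and heap size are arbitrary) provokes a "java.lang.OutOfMemoryError: Java heap space" when run with a constrained heap:

```java
import java.util.ArrayList;
import java.util.List;

// Run with a small heap, e.g. java -Xmx16m HeapSpaceDemo, to trigger
// java.lang.OutOfMemoryError: Java heap space.
public class HeapSpaceDemo {
    public static void main(String[] args) {
        List<byte[]> hoard = new ArrayList<>();
        while (true) {
            hoard.add(new byte[1024 * 1024]);   // retain 1 MB blocks forever
        }
    }
}
```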
-
Imagine buying a high-end Core i7 or AMD CPU, opening the box, and finding a midrange part sitting there with an asterisk and the label “Performs Just Like Our High End CPU In Single-Threaded SuperPi!”
(tags: ssd storage hardware sketchy kingston pny bait-and-switch components vendors via:hn)
-
Manages migrations for your Cassandra data stores. Pillar grew from a desire to automatically manage Cassandra schema as code. Managing schema as code enables automated build and deployment, a foundational practice for an organization striving to achieve Continuous Delivery. Pillar is to Cassandra what Rails ActiveRecord migrations or Play Evolutions are to relational databases, with one key difference: Pillar is completely independent from any application development framework.
(tags: migrations database ops pillar cassandra activerecord scala continuous-delivery automation build)
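For a sense of what "schema as code" amounts to, here is a rough Java sketch of a versioned migration runner built on the DataStax driver. This is not Pillar's API (Pillar is a Scala library with its own migration format); the keyspace, table names and CQL below are illustrative assumptions:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Illustrative-only migration runner: ordered CQL migrations live in source
// control, and every environment applies whatever it is missing on deploy.
public class MigrationRunnerSketch {

    private static final String[] MIGRATIONS = {
        "CREATE TABLE IF NOT EXISTS users (id uuid PRIMARY KEY, email text)",
        "ALTER TABLE users ADD created_at timestamp"
    };

    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("app_keyspace")) {

            // Track which migrations have already been applied in this keyspace.
            session.execute("CREATE TABLE IF NOT EXISTS schema_version "
                    + "(key text PRIMARY KEY, version int)");

            Row row = session.execute(
                    "SELECT version FROM schema_version WHERE key = 'app'").one();
            int applied = (row == null) ? 0 : row.getInt("version");

            // Apply anything newer than the recorded version, then record it.
            for (int v = applied; v < MIGRATIONS.length; v++) {
                session.execute(MIGRATIONS[v]);
                session.execute("INSERT INTO schema_version (key, version) "
                        + "VALUES ('app', " + (v + 1) + ")");
            }
        }
    }
}
```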
How to use TuneIn’s Record Timer feature
Handy.
Continuous Deployment for Mobile Apps with Jenkins: iOS Builds
The standard CloudBees way.
(tags: build deployment ios jenkins iphone continuous-deployment)