Project Voldemort: measuring BDB space consumption
HOWTO measure this using the BDB-JE command-line tools. This is exposed through JMX as the CleanerBacklog metric too, I think, but good to bookmark just in case
(tags: voldemort cleaner bdb ops space storage monitoring debug)
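(In case the bookmark rots: a minimal sketch of reading the same figure directly through the JE API rather than JMX. Assumes je.jar on the classpath and a placeholder environment path; the per-log-file utilization breakdown comes from the DbSpace utility shipped in je.jar, invoked with something like "java -jar je.jar DbSpace -h /path/to/env". Not Voldemort's own tooling, just the underlying JE calls.)

    // Sketch: read the cleaner backlog straight from a BDB-JE environment.
    // Open read-only; note JE only lets a second process attach if no
    // read-write process holds the environment, so run this against a
    // stopped node or a copy of the data directory.
    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.StatsConfig;

    public class CleanerBacklogCheck {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig config = new EnvironmentConfig();
            config.setReadOnly(true);
            config.setAllowCreate(false);
            Environment env = new Environment(new File(args[0]), config);
            try {
                StatsConfig statsConfig = new StatsConfig();
                statsConfig.setFast(true);   // don't walk the whole log
                EnvironmentStats stats = env.getStats(statsConfig);
                // Number of log files the cleaner still has to process to
                // reach the configured utilization target -- the
                // "CleanerBacklog" figure exposed over JMX.
                System.out.println("cleaner backlog: " + stats.getCleanerBacklog());
            } finally {
                env.close();
            }
        }
    }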
rendering pcm with simulated phosphor persistence
This is something readily applicable to display of sampled time-series metric data — it really makes regular patterns visible (and is nicely retro to boot).
When PCM waveforms and similar function plots are displayed on screen, computational speed is often preferred over beauty and information content. For example, Audacity only draws the local maximum envelope amplitude and (what appears to be) RMS power when zoomed out, and when zoomed in, displays a very straightforward linear interpolation between samples. Analogue oscilloscopes, on the other hand, do things differently. An electron beam scans a phosphor screen at a constant X velocity, lighting a dot everywhere it hits. The dot brightness is proportional to the time the electron beam was directed at it. Because the X speed of the beam is constant and the Y position is modulated by the waveform, brightness gives information about the local derivative of the function. Now how cool is that? It looks like an X-ray of the signal. We can see right away that the beep is roughly a square wave, because there’s light on top and bottom of the oscillation envelope but mostly darkness in between. Minute changes in the harmonic content are also visible as interesting banding and ribbons.
(via an _amazing_ kragen post on ghetto electronics) (tags: via:kragen pcm waveforms oscilloscopes analog analogue dataviz time-series waves ui phosphor retro)
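(If you want to try the effect on your own sampled data, here's a rough sketch of the idea, not kragen's code: treat each inter-sample interval as one tick of beam time and spread that fixed energy evenly along the segment the beam traces, so slow vertical movement piles up brightness and fast movement smears it thin. Everything below, including the synthetic square-ish test tone, is made up for illustration; it writes a plain PGM so there are no library dependencies.)

    // Phosphor-persistence-style rendering: each sample interval deposits a
    // fixed amount of "beam energy", spread evenly along the segment traced
    // between consecutive samples. Flat regions (small derivative) glow
    // brightly; fast transitions are faint. Output: grayscale PGM.
    import java.io.FileOutputStream;
    import java.io.PrintStream;

    public class PhosphorPlot {
        public static void main(String[] args) throws Exception {
            int width = 800, height = 300;
            // Synthetic signal standing in for real PCM: a soft-clipped tone,
            // roughly square, with a touch of third harmonic.
            int nSamples = 20000;
            double[] samples = new double[nSamples];
            for (int i = 0; i < nSamples; i++) {
                double t = i / 44100.0;
                double s = Math.sin(2 * Math.PI * 440 * t)
                         + 0.3 * Math.sin(2 * Math.PI * 1320 * t);
                samples[i] = Math.tanh(4 * s);
            }

            double[][] energy = new double[height][width];
            for (int i = 0; i + 1 < nSamples; i++) {
                // Screen mapping: constant X velocity, Y from amplitude.
                double x0 = (double) i / (nSamples - 1) * (width - 1);
                double x1 = (double) (i + 1) / (nSamples - 1) * (width - 1);
                double y0 = (1 - samples[i]) / 2 * (height - 1);
                double y1 = (1 - samples[i + 1]) / 2 * (height - 1);
                // One tick of beam time per interval, spread over the traversed
                // length, so per-pixel brightness ~ 1 / |local derivative|.
                double len = Math.hypot(x1 - x0, y1 - y0);
                int steps = Math.max(1, (int) Math.ceil(len));
                double perStep = 1.0 / steps;
                for (int s = 0; s < steps; s++) {
                    double f = (double) s / steps;
                    int px = (int) Math.round(x0 + f * (x1 - x0));
                    int py = (int) Math.round(y0 + f * (y1 - y0));
                    energy[py][px] += perStep;
                }
            }

            // Normalise with a gamma curve so faint traces stay visible.
            double max = 0;
            for (double[] row : energy) for (double e : row) max = Math.max(max, e);
            try (PrintStream out = new PrintStream(new FileOutputStream("phosphor.pgm"))) {
                out.printf("P2%n%d %d%n255%n", width, height);
                for (int y = 0; y < height; y++) {
                    StringBuilder line = new StringBuilder();
                    for (int x = 0; x < width; x++) {
                        int v = (int) Math.round(255 * Math.pow(energy[y][x] / max, 0.45));
                        line.append(v).append(' ');
                    }
                    out.println(line.toString().trim());
                }
            }
        }
    }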
stuff Google has learned from their hiring data
A. On the hiring side, we found that [interview] brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart. Instead, what works well are structured behavioral interviews, where you have a consistent rubric for how you assess people, rather than having each interviewer just make stuff up. Behavioral interviewing also works — where you’re not giving someone a hypothetical, but you’re starting with a question like, “Give me an example of a time when you solved an analytically difficult problem.” The interesting thing about the behavioral interview is that when you ask somebody to speak to their own experience, and you drill into that, you get two kinds of information. One is you get to see how they actually interacted in a real-world situation, and the valuable “meta” information you get about the candidate is a sense of what they consider to be difficult.
This makes sense, and matches what I learned at Amazon. Bad news for Microsoft, though! (Correction: Adam Shostack got in touch to note that MS haven't done this for 10+ years either.) Also, I like this:
A. One of the things we’ve seen from all our data crunching is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation. Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school. We found that they don’t predict anything. What’s interesting is the proportion of people without any college education at Google has increased over time as well. So we have teams where you have 14 percent of the team made up of people who’ve never gone to college.
(tags: google hiring interviewing interviews brainteasers gpa microsoft star amazon)