Ribbon
a client-side IPC library that is battle-tested in the cloud. It provides the following features: load balancing; fault tolerance; multiple-protocol (HTTP, TCP, UDP) support in an asynchronous and reactive model; caching and batching.
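For a sense of what the client-side load-balancing piece looks like in practice, here's a minimal sketch that drives Ribbon's core BaseLoadBalancer directly with a hard-coded server list. Class and method names are from Ribbon's load-balancer module as best I recall them and may differ between versions; in a real deployment the server list would be fed from Eureka rather than hard-coded, and the hostnames below are made up for the example.

    import com.netflix.loadbalancer.BaseLoadBalancer;
    import com.netflix.loadbalancer.Server;
    import java.util.Arrays;

    public class RibbonSketch {
        public static void main(String[] args) {
            // Static server list for illustration; normally populated from Eureka.
            BaseLoadBalancer lb = new BaseLoadBalancer();
            lb.addServers(Arrays.asList(
                    new Server("api-1.example.com", 8080),
                    new Server("api-2.example.com", 8080)));

            // The default rule is round-robin: successive calls rotate through
            // the servers that are still marked alive.
            for (int i = 0; i < 4; i++) {
                Server chosen = lb.chooseServer(null);
                System.out.println("request " + i + " -> "
                        + chosen.getHost() + ":" + chosen.getPort());
            }
        }
    }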
I like the integration of Eureka and Hystrix in particular, although I would really like to read more about Eureka’s approach to availability during network partitions and CAP. https://groups.google.com/d/msg/eureka_netflix/LXKWoD14RFY/-5nElGl1OQ0J has some interesting discussion on the topic. It actually sounds like the Eureka approach is more correct than using ZK:

‘Eureka is available. ZooKeeper, while tolerant against single node failures, doesn’t react well to long partitioning events. For us, it’s vastly more important that we maintain an available registry than a necessarily consistent registry. If us-east-1d sees 23 nodes, and us-east-1c sees 22 nodes for a little bit, that’s OK with us.’

See also http://ispyker.blogspot.ie/2013/12/zookeeper-as-cloud-native-service.html which corroborates this:

‘I went into one of the instances and quickly did an iptables DROP on all packets coming from the other two instances. This would simulate an availability zone continuing to function, but that zone losing network connectivity to the other availability zones. What I saw was that the two other instances noticed the first server “going away”, but they continued to function as they still saw a majority (66%). More interestingly, the first instance noticed the other two servers “going away”, dropping the ensemble availability to 33%. This caused the first server to stop serving requests to clients (not only writes, but also reads). […] To me this seems like a concern, as network partitions should be considered an event that should be survived. In this case (with this specific configuration of ZooKeeper) no new clients in that availability zone would be able to register themselves with consumers within the same availability zone. Adding more ZooKeeper instances to the ensemble wouldn’t help, considering a balanced deployment, as in this case the availability would always be majority (66%) and non-majority (33%).’
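To make the quorum arithmetic in that experiment concrete, here's a small illustrative Java sketch (not ZooKeeper code, just the strict-majority rule it enforces): with a three-node ensemble split 2/1 by a partition, the two-node side keeps a quorum and keeps serving, while the isolated node does not and stops serving reads as well as writes; a balanced deployment across three zones fails the same way whenever one zone is cut off.

    // Strict-majority rule: a side of the partition can serve only if it can
    // still see more than half of the configured ensemble.
    public class QuorumDemo {
        static boolean hasQuorum(int reachable, int ensembleSize) {
            return reachable > ensembleSize / 2;
        }

        public static void main(String[] args) {
            // 3-node ensemble partitioned 2/1, as in the iptables experiment above.
            System.out.println(hasQuorum(2, 3)); // true  -> the 66% side keeps serving
            System.out.println(hasQuorum(1, 3)); // false -> the 33% side stops serving

            // Adding nodes doesn't help a balanced deployment across 3 zones:
            // losing one zone of a 6-node ensemble still strands 2 nodes (33%).
            System.out.println(hasQuorum(4, 6)); // true
            System.out.println(hasQuorum(2, 6)); // false
        }
    }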
(tags: netflix ribbon availability libraries java hystrix eureka aws ec2 load-balancing networking http tcp architecture clients ipc)
The Myth of Schema-less [NoSQL]
We don’t seem to gain much in terms of database flexibility. Is our application more flexible? I don’t think so. Even without our schema explicitly defined in our database, it’s there… somewhere. You simply have to search through hundreds of thousands of lines to find all the little bits of it. It has the potential to be in several places, making it harder to properly identify. The reality of these codebases is that they are error prone and rarely have the necessary documentation. This problem is magnified when there are multiple codebases talking to the same database. This is not an uncommon practice for reporting or analytical purposes. Finally, all this “flexibility” rears its head in the same way that PHP and Javascript’s “neat” weak typing stabs you right in the face. There are some things you can be cavalier about, and some things you should be strict about. Your data model is one you absolutely need to be strict on. If a field should store an int, it should store nothing else. Not a string, not a picture of a horse, but an integer. It’s nice to know that I have my database doing type checking for me and I can expect a field to be the same type across all records. All this leads us to an undeniable fact: There is always a schema. Wearing “I don’t do schema” as a badge of honor is a complete joke and encourages a terrible development practice.
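As a hedged illustration of the ‘the schema is still there, just hiding in your code’ point (the field names and documents below are made up for the example): with a schemaless store, the type checks the database used to do for you end up scattered through application code.

    import java.util.Map;

    public class ImplicitSchema {
        // This validation IS the schema; it just lives in application code
        // instead of in the database's DDL.
        static int readAge(Map<String, ?> doc) {
            Object age = doc.get("age");
            if (!(age instanceof Integer)) {
                throw new IllegalArgumentException(
                    "expected 'age' to be an int, got: "
                    + (age == null ? "null" : age.getClass().getSimpleName()));
            }
            return (Integer) age;
        }

        public static void main(String[] args) {
            // Stand-ins for documents fetched from a schemaless store.
            System.out.println(readAge(Map.of("name", "Ada", "age", 36)));   // 36
            System.out.println(readAge(Map.of("name", "Bob", "age", "36"))); // throws: type drift
        }
    }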
(tags: nosql databases storage schema strong-typing)
-
from yesterday’s AWS Summit in NYC:
Cheat sheet of EBS-optimized instances. http://t.co/vmTlhUtpWk
Optimize your queue depth to achieve lower latency & highest IOPS. http://t.co/EO48oa0D6X
When configuring your RAID, use a stripe size of 128KB or 256KB. http://t.co/N0ldtFJ4t6
Use larger block size to speed up the pre-warming process. http://t.co/8UoIeWE2px
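On the last point, ‘pre-warming’ means touching every block of the volume before it takes production traffic; here's a hedged sketch of what the block-size advice amounts to (the device path and buffer size are my assumptions, not from the slides), in Java:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class PrewarmSketch {
        public static void main(String[] args) throws IOException {
            // Hypothetical EBS device path; run as root on the instance.
            String device = args.length > 0 ? args[0] : "/dev/xvdf";
            // A large read size (here 1 MiB) means far fewer I/O round trips
            // than touching the same volume 4 KiB at a time.
            byte[] buf = new byte[1 << 20];
            long total = 0;
            try (InputStream in = new FileInputStream(device)) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n;
                }
            }
            System.out.printf("read %d bytes from %s%n", total, device);
        }
    }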