Hacker News | caraboga's comments

NetBSD and OpenWRT. These platforms build across archs relatively easily and their configuration interfaces are pretty transparent.


Great examples.


I've been playing around with this:

https://app.zyptonite.com/

The login infrastructure uses a standard auth service, but once both (or more) clients know where everyone is, the transport goes directly from client to client (at least it looks that way from tcpdump).
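A quick way to check this yourself is to watch the wire after both clients have connected. This is just a sketch; the interface name and peer address below are placeholders, not details from the app:

```shell
# Capture traffic on the client machine, excluding the HTTPS auth
# traffic, and check whether packets go to the peer's address directly
# rather than through the auth server. "eth0" and 203.0.113.42 are
# hypothetical; substitute your own interface and the other client's IP.
sudo tcpdump -n -i eth0 'host 203.0.113.42 and not port 443'
```

If the session is truly peer-to-peer, you should see a steady flow to the peer's address once the connection is established, with the auth server dropping out of the picture.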


I think this is just embracing and extending. VMware has the on-premises infrastructure market cornered for the most part. But if they don't leverage that and pivot into using off-premises stuff, they stand to lose ground to Azure, since you can orchestrate just about all of Azure from Active Directory/PowerShell.

This is a good move on their part.


OpenStack doesn't have the in-product hooks to detect hardware failures on the host machine and prompt an automatic, VM-state-preserving live migration to other hosts.

If you are using KVM with OpenStack, the VM is actually suspended by way of ACPI, then moved, then unfrozen again. VMware doesn't do that in vMotion. All your problems with clock skew and dropped network traffic are lessened (they don't completely go away) with vMotion.
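For reference, here's roughly what triggering that migration looks like on the OpenStack side. The instance and host names are made up for illustration:

```shell
# See which hypervisors are available as migration targets.
nova hypervisor-list

# Live-migrate instance "web-01" to host "compute-07".
# Without a flag this assumes shared storage between the hosts.
nova live-migration web-01 compute-07

# If the hosts do NOT share storage, block migration copies the
# disk over the wire as well (slower, heavier on the network):
nova live-migration --block-migrate web-01 compute-07
```

Note this is the operator-initiated path; the parent's point stands that nothing in OpenStack watches host hardware and fires this off automatically.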


OpenStack Swift isn't great, but if you move the proxy/account/container services away from the actual object store, since the read/write requirements are different, and if you opt to run your object store on proper hardware, it will scale pretty well.

I'm curious as to what really made you hate it. From my vantage point of running OpenStack in production from Essex to Juno, Swift after the Folsom release was the most reliable part of OpenStack outside of Keystone and Glance.

(Never mind that it was also the easiest to upgrade.)

Or you can use the ceph stuff, which is also pretty nice.


Having inherited an OpenStack install with Swift deployed on an earlier generation of pods, these are the issues my cohorts and I ran into:

1) The object replication between the pods is hard on the array, as the replication is simply rsync wrapped inside nested for loops. If you have a bunch of small files across a lot of tenants, it'll hurt.

2) While Swift allows you to simply unmount a bad disk, change the ring files, and let replication do its thing, there are real issues. First of all, out-of-band SMART monitoring of bad sectors actually causes the disk to pre-empt some SATA commands and do the SMART checks first. On a heavily loaded cluster, a SMART check for the bad block count could kill the drive and take out the SATA controller along with it. We've downed storage pods that way. The only way we got around it was to take a pod offline once a week, run all our SMART checks, then put it back.

3) To replace a drive, you have to open the machine up and power it off. As any old operator will tell you, drives that have been running for a while do not like to be turned off. If you power off a machine to replace a bad drive, realize that you might break more drives just from the power cycle.

4) Once you change a drive out and use the ring replication of files to rebuild your storage pods, your entire storage cluster will take a non-trivial hit.

5) Last but not least, it is of paramount importance to move the proxy, account, and container services away from the hardware that also hosts the object servers. It's probably good to note that the account and container metadata is stored in SQLite files, with fsync writes. If you add and remove a bunch of files across multiple accounts, the container service is going to get hit first, then the object service. Furthermore, every single transaction to every metadata/data service, including replication, is federated through the proxy servers. If you look at a Swift cluster, the proxy services take up a large chunk of the processing space.
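For anyone who hasn't done the drive-replacement dance from points 2-4, it roughly looks like the sketch below. The mount point, builder file, and device id are placeholders, not values from our cluster:

```shell
# Stop Swift from writing to the failing disk.
umount /srv/node/sdh

# Drop the failed device (here id d12) from the object ring and
# recompute partition placement across the remaining devices.
swift-ring-builder object.builder remove d12
swift-ring-builder object.builder rebalance

# Ship the regenerated object.ring.gz to every storage node; the
# replicators (rsync underneath) then repopulate the moved partitions.
# That repopulation is the cluster-wide hit described in point 4.
```

The commands themselves are trivial; the pain is everything around them: the SMART checks that stall the controller, the power cycle that kills neighboring drives, and the rebalance traffic afterward.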

Source: Was OpenStack admin for a research group in Middle Tennessee. Ran an OpenStack cluster with 5 gen-1.5 pods for the entire Swift service, then moved account/container/proxy to three 12-core 2630s with 64 gigs of RAM. The cluster was for a DARPA vehicular design project: the first part fielded 3000 clients, the second part about 1/10 of that, but with more and bigger files (CAD files and test results, respectively).


Thanks caraboga. I was contemplating this setup a year or so ago, and great to learn your insights from actual implementation. It looked to me like Supermicro JBOD enclosures would be superior (mainly hot swappability) if a bit more expensive, would you agree?


My predecessor went with Backblaze clones for the drive and motherboard enclosure. You'll run into issues with the Backblaze design: they have two power supplies, but one is for the motherboard and boot drives, and the other is for the actual drives. Furthermore, as this was a Backblaze pod clone, there's no IPMI on the pod motherboards. That makes certain things more annoying than they have to be.

If I were to do it again, I would stay away from enclosures inspired by the Backblaze models. Supermicro enclosures are fine.


I'd be curious to know what drives and controllers you had that pre-empted requests due to SMART queries...


I don't recall the model, but they were 3-terabyte Seagates bought 3 years ago. I think my predecessor used the same controller cards as version 1 of the pods. You could tell the SMART queries pre-empted normal I/O: when a SMART query executed, disk activity for certain Swift object servers would just stall. The object servers would not return requests for at least some of the drives until the SMART query finished.

Curiously enough, I've also run smartctl against Samsung 840 Pros on HighPoint RocketRAID controller cards during the same project. Sometimes that crashed the controller card.
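To be clear about what "querying" means here, these are ordinary read-only smartctl invocations; whether they stall I/O or crash anything depends entirely on the drive and controller firmware. Device path is an example:

```shell
# Overall pass/fail health verdict for one drive.
smartctl -H /dev/sda

# Full attribute table, including reallocated sector count --
# the "bad block count" check that caused trouble on loaded pods.
smartctl -A /dev/sda

# Kick off a long offline self-test (what we ran in the weekly
# maintenance window instead of against a loaded cluster):
smartctl -t long /dev/sda
```

On well-behaved hardware all of these are harmless; the point of the thread is that on these particular Seagate/RocketRAID combinations, even the read-only queries weren't.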

(This was for a Gluster cluster, back when Gluster had broken quorum support. A copy of a piece of data, when out of sync with the other copies in a replica set, would be left unchecked. We found this out after one smartctl command took out the controller card, and then the machine along with it.)


DARPA is the wrong agency to follow when it comes to intelligence funding. Check out IARPA instead: http://www.iarpa.gov/ They are the public intelligence research arm, and researchers report to the Program Manager. Some of the programs are interesting. Some of them are downright... dodgy.


Is there a point of contact I could reach if I have questions about a position?


Wade's an easier book that has more undergraduate aids. http://www.amazon.com/Introduction-Analysis-Edition-William-...

I also like Strang for Linear Algebra for undergrads. Beyond that it's Hoffman and Kunze.


I think you need to dismiss the myth that your 'forming' years are behind you.

I work for a few research groups at a large, prominent southern university. There are people who get engineering phds in their 30s and 40s. You have coded a large chunk of your life and have an excellent framework to bolt stuff on to.

There is always a job market for an enthused developer who doesn't believe his own hype. You don't have to announce that you are not among the best; I don't think a lot of people who think they are the best are anywhere close. I write much better code now, in my 30s, than I did in my 20s. If you stay in touch with technologies and build stuff for yourself that you are willing to show people, your prospects should look pretty good.

Best of luck.


I guess what I meant is I see a lot of guys in the group who are younger (20-25) with more senior guys trying to "form" them into what they feel is a "good" developer. I've been around long enough to have developed my own ideas, and while I'm always open to having my views influenced and changed over time, I'm not going to sit down and have someone tell me "this is what you need to do to be good like me".

Your note about building stuff on your own is something I constantly do. I try to keep at least a couple projects going at home for the purpose of practice and incorporation of advancing technology. I don't have my finger on the pulse of everything, but I'm by no means being left behind.

