Hacker News | chadr's comments

Yes


Semi-related to this: is it possible to run multiple CPython interpreters in the same process, but with restrictions on shared memory? The idea is that each interpreter would still have its own GIL, and each interpreter would be restricted to sharing immutable structures only through message passing. Note: I'm not a big Python user, so if this already exists or has been discussed, I'm not aware of it.


I'm not deeply familiar with the CPython API, but that should be completely straightforward as long as you don't trade native CPython structures between interpreters. If you start with something like Numpy's array interface, and implement an array in C that knows that it could be accessed by multiple Python interpreters and needs to become immutable before it's passed, it should work. But I wouldn't expect it to be easy to take a regular PyObject and move it between interpreters, so the cost of serialization and deserialization might be an issue.
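As a sketch of the immutable-before-passing idea (using NumPy's `writeable` flag to freeze a buffer; the actual multi-interpreter plumbing is assumed, not shown):

```python
import numpy as np

# Freeze the array before handing its buffer to another interpreter;
# any attempt to mutate it afterwards raises a ValueError.
a = np.arange(4)
a.flags.writeable = False

try:
    a[0] = 99
except ValueError as e:
    error_message = str(e)  # numpy reports the destination is read-only
```

A second interpreter attaching to the same buffer would then be free to read it without coordination, since neither side can write.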


Sort of.

Note that Python has support for shared memory:

https://docs.python.org/2/library/multiprocessing.html#shari...
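For reference, the `Value`/`Array` API from that page looks roughly like this (exercised here in a single process just to show the shape; normally you would hand these objects to a `multiprocessing.Process`):

```python
from multiprocessing import Value, Array

# Shared ctypes objects backed by shared memory; child processes
# handed these objects see the same underlying storage.
counter = Value("i", 0)            # a shared C int
samples = Array("d", [1.0, 2.0])   # a shared array of C doubles

with counter.get_lock():           # Value carries its own lock
    counter.value += 1
for i in range(len(samples)):
    samples[i] *= 2.0
```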

In fact, `numpy` has its own mechanisms to support shared memory between processes:

https://bitbucket.org/cleemesser/numpy-sharedmem

Neither of these approaches seems to be used very commonly in practice.

Python itself has some "sub-interpreter" support. There was a long conversation about this last year:

https://mail.python.org/pipermail/python-ideas/2015-June/034...

Finally, I have a working approach using `dlmopen` to host multiple interpreters within the same process:

https://gist.github.com/dutc/eba9b2f7980f400f6287

- the approach is as bizarre as it is because it's a very naïve multiple-embedding. It was intended to prove, on a dare, that you could run a Python 2 and a Python 3 together in the same process. This was thought impossible, since there are symbols with non-unique names that the dynamic linker would be unable to distinguish (which led me to the `RTLD_DEEPBIND` flag for `dlopen`), and since there is global state in a Python interpreter that interacts in undesirable ways (which led me to `dlmopen` and linker namespaces.)

- this approach is stronger than the traditional subinterpreter approach, since I can host multiple interpreters of distinct versions: e.g., a Python 1.5 inside a Python 2.7 inside a Python 3.5.

- the approach is stronger in that I completely isolate C libraries. A good amount of functionality is provided by C libraries that maintain global state: e.g., `locale.setlocale` is a wrapper around the C stdlib locale, which is globally scoped.

- this approach is weaker in that it requires a dynamic linker that supports linker namespaces, which effectively limits its use on Windows

- this approach is weaker in that it's not complete: there's insufficient interest in this approach for me to actually write the shims to allow communication between the interpreters.

- this approach is weaker in that it has some weird restrictions such as being able to spawn only 15 sub-interpreters before running out of thread-local storage space
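To illustrate the C-library point above: `setlocale` mutates process-global state in the C library, so without namespace isolation every interpreter sharing the process observes the change (a small sketch; the "C" locale is guaranteed to exist everywhere):

```python
import locale

# setlocale mutates process-global state in the C library: every
# thread (and every interpreter sharing the process) sees the change.
previous = locale.setlocale(locale.LC_ALL)    # query the current setting
locale.setlocale(locale.LC_ALL, "C")          # switch to the portable "C" locale
current = locale.setlocale(locale.LC_ALL)     # query again: now "C"
locale.setlocale(locale.LC_ALL, previous)     # restore
```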

I suppose the premise is that the GIL-removal efforts involve pessimistic coördination. A sub-interpreter approach might have a lighter touch and allow the user to handle coördination between processes (perhaps even requiring/allowing them to handle locks themselves.)
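A minimal sketch of that message-passing style, using `multiprocessing.SimpleQueue` (shown within one process for brevity; payloads are pickled on the way through, so the receiver gets its own copy rather than shared mutable state):

```python
from multiprocessing import SimpleQueue

q = SimpleQueue()
payload = (1, 2, 3)   # an immutable structure to hand over
q.put(payload)        # pickled into the underlying pipe
received = q.get()    # unpickled on the way out: equal, but a distinct object
```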


I followed the sub-interpreter thread with great interest, but after the implementation went in I haven't seen anyone build the kind of multiprocess tooling that it was designed to enable. Have you heard of anything?


How long do they plan on supporting this fork? Hopefully for multiple years? I didn't see it mentioned on the site.


They're contractually obligated to support it for at least a year for me, and after that commercially viable software is commercially viable software.


You've sold me on it. It is well worth the money considering the time/effort it saves.



from Wikipedia... "The landing sequence alone requires six vehicle configurations, 76 pyrotechnic devices, the largest supersonic parachute ever built, and more than 500,000 lines of code."


They check for a few more error cases than the average project I guess, hehe.

I am looking forward to seeing it land successfully! Regardless of how it lands, though, it should be exciting to watch.


How do you even test for something like this?


I can't give you an exact answer, but I can tell you this: out of the 224,133 apps I track in the Android Market, here is the breakdown of how many installs they have as of this morning: http://dl.dropbox.com/u/131/installs.csv


Thanks! Very interesting. It looks like a not-insignificant percentage of apps has conquered a pretty large user base...


Our artists get to choose the technique that works best for them. A lot of it is done using the computer, but some of it is done by hand and scanned. The coolest bit of art tech is definitely the Wacom Cintiq. It lets the artists draw on top of an LCD panel.


No official Android plans as of yet, but it is something we are thinking about. Also thanks for the compliment on the art. We do all the art in-house and I passed your message on to the team.


High-quality sysadmins are evolving into what is called the devops role. Troubleshooting, scaling, architecting, and automating production systems are just a few areas where devops people shine. The cloud just provides them another set of tools to work with. It also frees them from dealing with the annoying/repetitive tasks (spinning a CD to install the OS, plugging in the network cables, etc.) and allows them to focus on improving the application. A number of devops people I know can easily transition into developer roles when required. Summary: a great sysadmin should know how to code, and does so in order to improve the app.


Agreed. In manufacturing, there are engineers that design the product (currently called developers) and there are engineers that design the assembly line (currently called sysadmins).

When Toyota retools a factory, they don't have robots build/deploy the robots; there are engineers that "re-tool". This is exactly what is happening at Google/Facebook/Twitter, etc. There are maintenance guys, aka NOC monkeys, and there are engineers.

Eventually, everyone will need to be an engineer and that is where systems administration is going with the devops movement.

Here is a presentation I gave on it; you tell me whether sysadmins are going away: http://crunchtools.com/wp-content/uploads/2010/04/DevOps.pdf


DevOps! That's a new term for me and I love it! I'm a Systems Analyst by title, but DevOps better describes what I actually do. Aside from supporting implementation and configuration, I'm also project managing, doing QA, and coding regularly to resolve the shortcomings of our system. I'll be using this term more regularly.


It definitely sounds like you wear multiple hats at your job. Shoot me an email... I'm interested in hearing more about what you do.


That still feels like "sysadmins are being replaced by developers" to me.


I think you're missing the point. Many developers don't know how to develop systems that will work well in production. At the same time, many sysadmins know only about the OS, hardware, and network. Great sysadmins are learning to combine the two skill sets. It's clearly a hybrid role and not a case of one replacing the other.


Only developers think sysadmins aren't developers.


Conversely, only sysadmins think sysadmins can code. A classic example is a close personal friend of mine who works as a sysadmin for an insurance company.

On the two occasions I've had the unfortunate displeasure of working with him on web-related projects (we share similar hobbies, and our local communities need web services of various kinds), his approach has been to produce vile, Rube Goldbergesque concretions of Perl & shell scripts.

This is absolutely THE GUY I'd call if I needed some advanced logfile parsing in a hurry, but when it comes to actually developing web stuff, the guy's totally in the dark. What's worse, his only metric is "it works and it makes sense to me," so he's content to make a complete hash of a development project.

During a recent conversation he admitted to having no real grasp on basic HTML and hadn't heard of CSS. The thing that kills me is he's convinced he's qualified to run websites for our local groups and absolutely refuses to accept input on the subject.

Oh well, the guy's also an ace whitewater kayaker and one of my favorite paddling buddies. Considering this guy will probably save me from drowning one day I can look past his failings as a "developer".


Why don't you trust Passenger? I'm serving about 6 million page views/day with it, and it has been extremely reliable and easy to work with.


The most compelling use case I've seen where Passenger loses to Unicorn is rolling restarts. The biggest trust issue that I've seen is that of source editability - Unicorn and Mongrel are easier to debug and patch, while Passenger requires a commanding knowledge of Apache to fix.


Do you routinely edit the source code of your web server? Just curious since I've never needed to do that.

