
On a related note, I found that asyncio (cooperative multitasking in Python) has made huge strides in usability in Python 3.7. The code feels sequential, and it's quite easy to understand the flow of events; it's very explicit, yet you never have to run the callbacks yourself, exceptions do show up, and you no longer need to manipulate a `loop` object! If you're trying to learn how to use it, my advice would be:

* Stick to the official documentation[1]! Many resources found online are outdated, give convoluted or plain wrong examples, and fail to mention the recent additions.

* Use `asyncio.run`. The alternative (calling `get_event_loop()` and `loop.run_until_complete()`) is cumbersome and hard to get right; even the documentation wasn't correct[2]. The documentation's part on Coroutines and Tasks is well written, and the best part is the examples: simply reading them one after another gives a good insight into how asyncio should be used.

* If you want to work with sockets, use the high-level "Streams"[3] if you want to stay in the standard library. `asyncio.start_server` is a powerful abstraction. If you're willing to use a third-party library, I found `pynng`[4] a breeze to work with: it is compatible with asyncio and other async frameworks, and I found it more straightforward than pyzmq, which also supports asyncio[5].

* If you want to run async functions in the main event loop and blocking functions in threads (with `loop.run_in_executor`), Janus[7] seems to be a great way to share data between them. I have not used it yet, though.
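To illustrate the first points, here is a minimal sketch combining `asyncio.run` with the Streams API; the echo handler and the self-connection test are mine, not from any official example:

```python
import asyncio

async def handle_echo(reader, writer):
    # Echo one line back to the client, then close the connection.
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Start the server on an ephemeral port, then talk to it once.
    server = await asyncio.start_server(handle_echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))  # → b'hello\n'
```

Note that there is no `loop` object anywhere: `asyncio.run` creates and tears down the loop for you.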

My use case was that I wanted to read sensor data on one computer (server) and broadcast it to other computers (clients) which would in turn graph it live, or write it to disk. The server used `pyserial_asyncio`[6] to asynchronously read serial data and published it via TCP using the pub/sub scheme from pynng (pynng.Pub0).

The clients could then receive the data either synchronously or asynchronously by subscribing to the server (pynng.Sub0), and make plots in real time.

[1]: https://docs.python.org/3/library/asyncio.html

[2]: https://github.com/python/asyncio/pull/465#issue-93620963

[3]: https://docs.python.org/3/library/asyncio-stream.html#asynci...

[4]: https://pypi.org/project/pynng/

[5]: https://pyzmq.readthedocs.io/en/latest/api/zmq.asyncio.html

[6]: https://github.com/pyserial/pyserial-asyncio

[7]: https://github.com/aio-libs/janus


I agree, and the docs got better too. In 3.7, asyncio is usable by people who don't understand it very well.

Before that, you had to learn the whole thing brick by brick before being able to do something serious.

Yet, there is a missing piece I'm hoping we'll see in 3.8: a way to limit the scope of `asyncio.ensure_future()`.

Indeed, right now you either `await` to get sequential execution, or you call `asyncio.ensure_future()` to get a concurrent one. The latter, unfortunately, is the equivalent of a GOTO, and worse, it can contain a GOTO itself (see https://vorpus.org/blog/notes-on-structured-concurrency-or-g...).

So the best practice is to use `asyncio.gather()` to delimit the pyramid of the task life cycle. Unfortunately few people know this, and hence few do it. Plus it is not fun to do, it's boring boilerplate, something Python usually frees you of.
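A small sketch of that best practice, with a made-up `fetch` coroutine standing in for real work:

```python
import asyncio

async def fetch(n):
    # Stand-in for real I/O work.
    await asyncio.sleep(0.01)
    return n * 2

async def main():
    # gather() delimits the tasks' lifetime: main() cannot return
    # before every child coroutine has finished (or raised).
    results = await asyncio.gather(fetch(1), fetch(2), fetch(3))
    return results

print(asyncio.run(main()))  # → [2, 4, 6]
```

The key point is that no task outlives the `await asyncio.gather(...)` line.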

Yuri is thinking about how to implement the trio solution (the infamous nursery) in uvloop, and if he does, such features usually get ported to the stdlib a year later.

Meanwhile, I noted that a simple wrapper meets the Pareto requirement: https://github.com/Tygs/ayo/

You can see it's not really hard to write your own version of it if you need to. It helped me a lot: the code is easier to reason about, and you remove a lot of edge cases.
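To illustrate, here is a rough nursery-style wrapper one could write (this is my own sketch, not ayo's actual API):

```python
import asyncio

class TaskScope:
    # Minimal nursery-style wrapper: tasks spawned inside the
    # `async with` block are guaranteed to be finished (or
    # cancelled) by the time the block exits.
    def __init__(self):
        self._tasks = []

    async def __aenter__(self):
        return self

    def spawn(self, coro):
        task = asyncio.ensure_future(coro)
        self._tasks.append(task)
        return task

    async def __aexit__(self, exc_type, exc, tb):
        if exc_type is not None:
            # On error, tear down every remaining child task.
            for t in self._tasks:
                t.cancel()
        await asyncio.gather(*self._tasks, return_exceptions=True)

async def main():
    async with TaskScope() as scope:
        t1 = scope.spawn(asyncio.sleep(0.01, result=1))
        t2 = scope.spawn(asyncio.sleep(0.01, result=2))
    # Both tasks are done here; no GOTO-style escape is possible.
    return t1.result() + t2.result()

print(asyncio.run(main()))  # → 3
```

The `gather()` boilerplate is written once, in `__aexit__`, instead of at every call site.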

I'll have to test Janus, it seems super nice.


> So the best practice is to use `asyncio.gather()` to delimit the pyramid of the task life cycle. Unfortunately few people know this, and hence few do it.

And even then, any async function might run `loop = asyncio.get_event_loop()`, spawn some background tasks, and return before they have stopped! I actually had this exact problem with my realtime sensor data server: the background tasks were never properly closed, and the sockets remained open.
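A minimal reproduction of that failure mode (the `sneaky` function is invented for illustration):

```python
import asyncio

async def sneaky():
    # Spawns a background task and returns immediately; the caller
    # gets no handle on it and cannot tell it is still running.
    asyncio.ensure_future(asyncio.sleep(10))

async def main():
    await sneaky()
    # sneaky() has returned, but its background task is still alive.
    pending = [t for t in asyncio.all_tasks()
               if t is not asyncio.current_task()]
    for t in pending:
        t.cancel()  # explicit cleanup, which nothing forced us to do
    return len(pending)

print(asyncio.run(main()))  # → 1
```

Without the explicit `cancel()`, the orphaned task would only die when the whole loop shuts down, which is exactly how sockets end up left open.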

The article you linked to (https://vorpus.org/blog/notes-on-structured-concurrency-or-g...) is super interesting. I didn't get the appeal of `trio` before, but now it does seem really useful.


> Starting from an array A that has n distinct integers

I don't know in what ways they are different, but these programs were not designed to work with duplicates in the input. That probably explains the results.


Yeah you're right


And this issue could easily be overcome if more projects implemented the "socks5h://" scheme (note the 'h'), initially introduced by curl[1] to make the SOCKS proxy also tunnel DNS requests. Sadly, very few programs recognize this (non-standard) scheme: git[2] (via curl), Python's requests (and urllib) libraries[3]... and not much else.

[1]: https://curl.haxx.se/docs/manpage.html#--proxy

[2]: https://github.com/git/git/blob/20fed7cad40ed0b96232feb82812...

[3]: http://docs.python-requests.org/en/master/user/advanced/#soc...
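For requests, the difference is just the URL scheme in the proxy configuration (port 9999 here is a hypothetical local proxy, e.g. from `ssh -D 9999 host`):

```python
# The "socks5h" scheme asks requests to resolve DNS through the
# proxy; plain "socks5" would leak DNS lookups to the local resolver.
proxies = {
    "http": "socks5h://localhost:9999",
    "https": "socks5h://localhost:9999",
}
# Needs: pip install requests[socks]
# import requests
# r = requests.get("https://example.com", proxies=proxies)
print(proxies["https"])
```

One character in the scheme decides where name resolution happens.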


Firefox over a SOCKS5 proxy done with ssh passes this test if you enable the proxy DNS setting. Creating the proxy is as simple as:

  ssh -D 9999 -q -N <your ssh server>
and then configure that in the Firefox proxy settings (SOCKS host localhost, port 9999). If you want a simple way to enable/disable this in Firefox, I built a minimal extension to do it:

https://addons.mozilla.org/en-US/firefox/addon/proxyswitcher...

The defaults in the config already match that ssh line so all you need to do is press the globe button to enable the proxy.


You might be the right person to ask:

When I tried shadowsocks and enabled "proxy dns" in Firefox, every website became painfully slow. Is this simply because no DNS cache had been built?


I don't know, but note that shadowsocks is not a traditional SOCKS proxy. If I understood it correctly, you run a traditional SOCKS proxy to localhost and then a more heavily encrypted link to the actual host. Maybe that second link was slow, either because of the encryption or because it's written in Python? ssh gives you as much or more security anyway, as there are few protocols as thoroughly checked as ssh, and performance seems fine.


Huh? VPNs aren't SOCKS proxies.


For your information, you can get the Firefox for Android port of NoScript, with the cheeky (but somewhat relevant) name NoScript Anywhere++ (NSA++), here: https://noscript.net/nsa/ It's quite experimental and the UI isn't great on a small phone, but it works.


If you feel brave, you could try the C++ implementation of the i2p router, purplei2p[1] (aka i2pd[2]). Last time I tried, there were a few rough edges, but it is now over two years old, so it has probably improved (or you may even improve it yourself!).

[1] https://purplei2p.github.io/ [2] https://github.com/PurpleI2P/i2pd

