Learning Python Without Library Overload (chrisconlan.com)
139 points by happy-go-lucky on Sept 4, 2017 | 76 comments



I see quite a few people disagreeing with the author but I think there's a key difference between:

a.) Learning python when you are also a beginner to programming

b.) Learning python when you're comfortable in another language

If, like the student in the story, you are in category a.) then I think the author is definitely right. The distraction of myriad libraries isn't helpful.

However, for category b.) it depends a lot more. I think there is value in understanding the core of a language but - depending on experience, familiarity with similar languages, end goal etc - I think we're talking between a few hours and a first sketch at a project before you're hindering yourself by ignoring the rest of the ecosystem.


I notice that the article says not to use urllib, but doesn't say what to use instead if you need an HTTP client. The correct answer, of course, is requests...which isn't in the standard library. So there's a tradeoff between a good API that you have to figure out how to install, and a lousy API that's universally available. (This tradeoff isn't as bad as it once was—the Python package management experience has improved a whole lot over the past decade—but getting packages installed is still a barrier for new users, especially on Windows.) It's not just a matter of saying "use the stdlib"; the stdlib is imperfect and its flaws are not limited to the concerns of power users.
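To make the tradeoff concrete: here's the same GET request in stdlib urllib versus requests. The requests lines are commented out since they need a pip install; the URL and header values are just placeholders.

```python
import urllib.parse
import urllib.request

# The stdlib way: build a GET with a query string and a header.
# Verbose, but available everywhere Python is installed.
params = urllib.parse.urlencode({"q": "python", "page": 1})
req = urllib.request.Request(
    "https://example.com/search?" + params,
    headers={"User-Agent": "my-script/0.1"},
)
# body = urllib.request.urlopen(req).read()  # actually send it

# The requests way (third-party, `pip install requests`):
# import requests
# body = requests.get("https://example.com/search",
#                     params={"q": "python", "page": 1},
#                     headers={"User-Agent": "my-script/0.1"}).content

print(req.full_url)
```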

Also, it's worth separating the issue "this library has an overcomplicated API surface" from the issue "this library uses Python magic in ways that make its semantics different from the default expectations for a Python API". For instance, there's nothing remotely magical about urllib; it's just unwieldy to use. Most of the other listed packages, like pandas, have the opposite problem.


While I understand the argument, I also have a big counter-point to offer.

Python is one of the top beginner languages for people new to programming, and rightfully so.

But beginners often want to build something "cool" and tangible, and need that to stay engaged.

Printing the first 100 prime numbers after implementing a Sieve of Eratosthenes, or building a command-line calculator, does not fall into this category for most.
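(For reference, the sieve exercise in question is only a handful of lines, which is rather the point about it not being exciting:)

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross off every multiple of p, starting at p*p.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```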

Building a website or a GUI application is way more exciting, and you need libraries for those.


I was never huge on Python but always preferred it to Ruby. One web framework that always fascinated me was CherryPy[0]. On my first (and current) real programming job I was asked to work on a RESTful type of project, and was given free rein to research a language and stack. I was initially asked if I liked Python, and once my interest in Python was apparent I suggested CherryPy. I stuck with CherryPy, and 4 months later my RESTful service had a front-end using the Mako[1] templating library alongside its RESTful back-end. All the code is done in an object-oriented fashion. I can't imagine doing the same work without any external libraries.

[0]: http://cherrypy.org/

[1]: http://www.makotemplates.org/

Note: Mako is used by Reddit for their template library.

https://github.com/reddit/reddit/blob/master/r2/setup.py#L62


How does CherryPy compare to Flask in your experience?


I can't say too much about Flask since I have not used it as much as I have CherryPy. I would like to try out Flask at least once for a big project for comparison but don't have the time to do so yet.


There's an IRC channel with 2,000 Python devs ready for instant feedback. It's like a live-chat Stack Overflow for Python, or better.

Given the Python community, IMO there's no reason for beginners to shy away from libraries. ESPECIALLY Python libraries and frameworks; e.g. Django is magic and can easily be learned/used by a dev who only has basic Python knowledge.


Magic impedes learning. Magic is great when you know enough to be able to peek behind the curtain when you need to. But abstractions leak, and magic abstractions more than most.


Isn't Python itself, like most high level languages, a large collection of abstractions, some quite magic?


Absolutely it is. I'm a believer that learning programming with a high level interpreted language with a weak type system is a disadvantage in the long run.

That doesn't mesh with people who want to do something valuable out the gate, which is fair enough. But I've seen so many people start with the "easier" path and run into a brick wall they can't surmount later and just give up. It makes me sad to watch every time.

EDIT: Substituted "learning a high level" with "learning programming with a high level"


I wonder if the transition is the problem, or if some people just have difficulty with the low level stuff, regardless of previous experience. I can't say I have data, but from my college colleagues, I don't remember the people having trouble with, say, pointers being mostly the ones who already programmed in Python or JavaScript.


That's a good point, and it might indeed be a self-selection kind of thing. I'm not an expert and I don't have the data there.

But I don't even mean making a transition, you can do nothing but Python or JS and still hit the wall. Because eventually you'll need to understand the lower level concepts to do better, even with higher level languages and libraries.

It's the reason that learning to program, and program well, is not 100x faster than it was 50 years ago, despite all the claimed benefits of this or that language or framework or whatever. Eventually you need to know what's going on under the hood.

Also worth noting that I primarily use JS in my day to day work, and it does what I need it to do. Not saying super high level languages aren't useful, just that they're not a good starting point IMO.


I think leaky abstractions are not really something to worry about when you're learning. Maintaining interest while learning, especially at first when everything is strange and new, strikes me as a much bigger problem. Magic addresses that.


Sure, but when you encounter a leaky abstraction someone has to pull back the curtain for you and help you get past it. Magic inevitably has brick walls behind the curtains.


Could you expand on how /where to join this irc?


Likely #python on FreeNode:

https://www.python.org/community/irc/


I really like the CSV statistics example project in the post. That IS a cool tool if you are learning in a science or data processing context.


As I read this, I originally disagreed with the author - for me the challenge of a new language is not the language itself but becoming familiar with the libraries that everyone uses and understanding how to get things done with them - but then I saw this:

>> packages that alter its syntax and behavior

Is this kind of thing common with Python? I'm used to C#, where libraries are just libraries. Sure, you still have to learn your way around them, but they don't change the language out from under you. Even in big frameworks that completely dictate the overarching organization of your code, it's still C# you're writing.


Python has operator overloading like C#, which, if used in relatively more interesting ways, can cause code using these APIs to look pretty different from "standard" Python. Python also allows overloading (i.e., defining new semantics for) more interesting things like field lookups and class definitions. This is extremely powerful and lets you do various useful things that are impossible in C#, but it also means you can't take much of anything for granted when using someone else's API, as there might be "spooky action at a distance". And so there's a real cost to learnability and maintainability.
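A toy sketch of the kind of magic meant here — not any real library's API, just the hooks Python exposes for field lookups and operators, which let a call site behave in ways the reader can't predict:

```python
class Spooky:
    """Toy class showing Python's hooks for lookups and operators."""

    def __getattr__(self, name):
        # Called for any attribute not found normally, so
        # spooky.whatever "works" even though it was never defined.
        return f"you asked for {name!r}"

    def __add__(self, other):
        # '+' can mean anything the class author wants.
        return "not addition at all"

s = Spooky()
print(s.nonexistent_field)  # → you asked for 'nonexistent_field'
print(s + s)                # → not addition at all
```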

There is something of a culture in Python of not being overly clever, which keeps this stuff in check. (Contrast Ruby, which offers equally powerful language facilities without the cultural norm of not using them for crazy things.) But for relatively all-encompassing frameworks, the ergonomic benefits are often too great to ignore, so you start to see things like Django models and class-based views, which tend to look and feel quite different from "regular" Python classes.


> There is something of a culture in Python of not being overly clever, which keeps this stuff in check. (Contrast Ruby, which offers equally powerful language facilities without the cultural norm of not using them for crazy things.)

Have you seen namedtuple? It's part of the stdlib… and metaclasses are designed to let devs do wildly unpredictable things when creating instances (not to mention Python's OO layer is basically a clever hack).


"Keeping in check" is not the same as "not allowing". It has metaclasses, but they're rarely used. namedtuples are also a rare case in a large stdlib.

How is the OO layer a clever hack?


Most other OO languages can pass self implicitly, and don't distinguish between initialization & construction.

> There is something of a culture in Python of not being overly clever

This is the part I disagree with. So much python I read in apps, libraries, web frameworks, and pythonista tweets is needlessly clever.


> Most other OO languages can pass self implicitly

That's not a hack, it's a conscious decision[1]. It actually avoids having to implement hacks in the case of decorators; for example, @classmethod and @staticmethod can be implemented in pure Python thanks to that. It also avoids all the crap that litters JavaScript code, like "var self = this" and bind(), since you can just use different names for those variables.

Implicit "this" works in Java and similar because the language is so restricted anyway that an explicit version would be useless.
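To make the decorator point concrete, here's a simplified pure-Python stand-in for the builtin staticmethod, using the descriptor protocol. The real builtin is implemented in C; this is just a sketch of the idea, possible precisely because methods take self explicitly:

```python
class my_staticmethod:
    """A simplified pure-Python stand-in for the builtin staticmethod."""

    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # Descriptor hook: instead of binding the function to the
        # instance (which would prepend self), return it untouched.
        return self.func

class Math:
    @my_staticmethod
    def double(x):
        return 2 * x

print(Math.double(21))    # → 42
print(Math().double(21))  # → 42  (no implicit self prepended)
```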

> and don't distinguish between initialization & construction.

Fair enough, but are they able to do what __new__ does?
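For instance, a toy sketch of one thing __new__ buys you — deciding *which* object gets returned before __init__ ever runs, e.g. interning/caching instances:

```python
class Interned:
    """Cache instances by key: __new__ controls which object you get."""
    _cache = {}

    def __new__(cls, key):
        # Return the existing instance if one was already made for key;
        # construction and initialization are separate steps.
        if key not in cls._cache:
            cls._cache[key] = super().__new__(cls)
        return cls._cache[key]

    def __init__(self, key):
        self.key = key

a = Interned("x")
b = Interned("x")
print(a is b)  # → True
```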

> This is the part I disagree with. So much python I read in apps, libraries, web frameworks, and pythonista tweets is needlessly clever.

But compared to which languages?

[1] http://neopythonic.blogspot.be/2008/10/why-explicit-self-has...


From that link:

> There's no way without knowing what the decorator does whether to endow the method being defined with an implicit 'self' argument or not.

How about letting the code object decide if it's a method or a function, using a meta-object protocol to track when a code object is a method? Frankly, Guido's arguments boil down to "I can't see a nice way to not have it", not "here's a good reason to prefer this style".

> But compared to which languages?

Plain python.


> How about letting the code object decide if it's a method or a function, using a meta-object protocol to track when a code object is a method? Frankly, Guido's arguments boil down to "I can't see a nice way to not have it", not "here's a good reason to prefer this style".

That would add complexity to the implementation and use to solve a specific problem, while leaving others unsolved. In any case, disagreeing with a decision doesn't make it a hack.

> Plain python.

Fine, but I think the claim is that Python's culture avoids being overly clever compared to the culture of languages that offer similar possibilities. Being overly clever in your personal view is inherently subjective and therefore unarguable.


> That would add complexity to the implementation

That's the job of a good language: to move the necessary complexity to the core, so users of the language do not have to address it by being clever.

> Being overly clever in your personal view is inherently subjective and therefore unarguable

non sequitur: I write the most boring plain python that can get the job done. I see other prolific python devs reveling in writing obtuse code (probably as jokes? frustrated standup comics?)


>> There is something of a culture in Python of not being overly clever

> This is the part I disagree with. So much python I read in apps, libraries, web frameworks, and pythonista tweets is needlessly clever.

Can you give some examples? I'm not saying you're right or wrong, would just like to see some.


I gave examples in the GP of the post you’re replying to: namedtuple and metaclasses.


No, it's not super common, IMHO.

Python doesn't really even lend itself to altering syntax in significant ways (though some of the libraries he mentioned have low-level C implementations that can/do alter Python syntax). If you're doing scientific computing you're going to see some new things that don't entirely look like Python. Things like matrix math and the like have new types and (mildly new) syntax. But, I don't think it's surprising for people who know Python.

I guess there may be some cognitive dissonance that comes from using projects that paper over Fortran and C++ libraries with light Python wrappers (a lot of scientific computing libraries, again...so, not often new syntax, but not very Pythonic, either).

The specific example mentioned of matplotlib (and SciPy and Numeric, etc.), probably comes from the origins of the library...it was intended to make it easy for people coming from Matlab to switch to Python for the same sorts of tasks, so it's conceptually a mashup of the two.

But, maybe the argument should just be "walk before you run", rather than singling out any particular library. Particularly if those libraries might be the sole reason the person is learning Python. Sounds like the person he was helping out with her matplotlib problem really wanted to be working with matplotlib and the scientific computing ecosystem Python offers. You've gotta learn them eventually if that's the kind of work you want to do with Python. They might be a little weird, but they're not a whole new language.


Definitely in the data science stack though. For example: how pandas behaves with slicing, and copy vs view semantics are totally different from vanilla python, and are confusing enough coming from vanilla python. God help you if you learn pandas and python at the same time.


>how pandas behaves with slicing, and copy vs view semantics are totally different from vanilla python, and are confusing enough coming from vanilla python

Can you give a few examples of this?


Sure thing.

For slicing, see Jake VanderPlas's book (https://jakevdp.github.io/PythonDataScienceHandbook/03.02-da...), which describes the complexities -- some kinds of pandas slicing include the last index, some kinds don't.

View vs copy is even worse. There are some situations where indexing on a DataFrame yields a view, some situations where it yields a copy, and they are effectively indeterminate in a lot of situations. From the Pandas documentation https://pandas.pydata.org/pandas-docs/stable/indexing.html#i...:

  See that __getitem__ in there? Outside of simple cases, it's very
  hard to predict whether it will return a view or a copy (it depends
  on the memory layout of the array, about which pandas makes no
  guarantees), and therefore whether the __setitem__ will modify dfmi
  or a temporary object that gets thrown out immediately afterward.

By contrast, the vanilla python rules are much simpler: slicing always leaves off the last index, and the rules for view vs copy of mutable data structures like lists are at least determinate...
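To spell out those vanilla rules with plain stdlib objects (pandas's label-based `.loc`, by contrast, includes the end label, which is one of the surprises in question):

```python
data = [10, 20, 30, 40, 50]

# Slicing always excludes the end index...
chunk = data[1:3]
print(chunk)    # → [20, 30]

# ...and for lists it always returns a copy, never a view:
chunk[0] = 999
print(data[1])  # → 20 (the original is untouched)

# Plain assignment, on the other hand, is always just a new
# reference to the same object — deterministic either way.
alias = data
alias[0] = -1
print(data[0])  # → -1
```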


Thanks for the reply. I can see how it can be confusing, and more so for that indeterminate part.


It depends a lot on what you mean by "Learning Python". Is it "Learning Python as a second or third programming language", or "learning programming using Python"?

Understanding libraries and idiomatic language constructs is important - but only if you're well past the "Hello World!" stage. If you can't get close to a working fizzbuzz yourself - you're probably not ready for BeautifulSoup or Numpy or Django. You _might_ be able to copy paste Stack Overflow examples, but there's some basic "learning programming" that needs to be done before you'll understand what you're doing...
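(For anyone wondering, the fizzbuzz gate in question is roughly this — hardly deep, but it exercises exactly the loop-and-branch basics being discussed:)

```python
def fizzbuzz(n):
    """Classic warm-up: return the fizzbuzz sequence for 1..n."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```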


I disagree in the case of numpy. If you're going to be using Python for numerical work, almost all of your data will be numpy arrays, so there is no reason not to get used to them from the beginning. You don't have to go digging into the obscure corners of numpy, but you should be familiar with basic arrays and how they work.


It's not a given that you'll need to work with numpy arrays. It's quite likely that numpy arrays will be used behind the scenes, but you'll be working in a different abstraction (e.g. pandas dataframes or tensorflow tensors) that will hide pretty much all of numpy stuff from you.


But I'd argue that even in those cases they are easier to understand if you already know numpy arrays as they share a lot of syntax. And if you're ever going to interact with other python numeric libraries you're sooner or later going to be converting your dataframes and tensors to and from numpy arrays anyway.
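For what it's worth, the "basic arrays" in question are a small surface. A sketch of the core idioms that pandas and friends inherit (assumes numpy is installed):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])

# Vectorized arithmetic: operators apply elementwise, no loops needed.
print(a * 2)       # → [2. 4. 6.]

# Boolean masking, the idiom dataframes and tensors reuse:
print(a[a > 1.5])  # → [2. 3.]

# Unlike list slicing, a numpy slice is a view into the same buffer:
view = a[1:]
view[0] = 99.0
print(a[1])        # → 99.0
```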


Ah thanks, I've always looked for a way to sum up this distinction and you've captured it perfectly (particularly "learning programming using x".. I think a lot of books called "Learning X" should be called that instead).

Is there any recommended material for learning Python as a second or third language?


There are a few guides out there in the form of "X for Y programmers", eg. Python for Ruby programmers. Personally, I find it more helpful to use regular materials and skip the stuff I already know.


When I do that, I miss the details that can catch you off guard.

If I had done that with JavaScript, I can assure you that function-based scoping would have destroyed my brain when trying to debug. But since I read a book on JavaScript[1] that assumed familiarity with other languages, I was prepared for it.

[1]: https://www.amazon.com/Professional-JavaScript-Developers-Ni...


Python is one of the languages that are heavily used by people who do not want to do software development, but want to get stuff done. It's patronizing intellectual pedantry to tell those people to actually learn programming when they can just use the few numpy commands required for their immediate work. Sure, give a man a fish etc.; but sometimes, people just want the fish already and don't want to mine the ore required for building a fishing rod from scratch.


I'm not sure if it was just me, but the contrast of the article text was so low that it was pretty difficult to read.


This has become a tendency on many web sites for some time now. I dislike it too; it's a pain in the ass. I sometimes have to select some text with the mouse so that it goes into reverse video and I can read it more easily. (Downvoted HN comments are a good example, though in the case of HN, low contrast is not the norm, except for comment headers.) Not sure what the exact reason for this trend is. Some ideas:

- someone started it off, maybe a so-called high-profile designer, and others blindly copied it. [1] Designer doesn't necessarily mean an individual one. It could be for example Twitter Bootstrap that was copied, since I know that a lot of web sites use it.

[1] Blind copying (without regard to appropriateness) has been a common trend in some people since probably the start of mankind. Not sure much can be done about it. People have to learn for themselves that they need to think for themselves.

- the monitors of the people creating these low-contrast sites may be very high resolution and/or high brightness, so the low-contrast still is readable to them.

- it could be an example of what I call (just made up the term) the C++ syndrome, where newbie C++ developers use language features liberally where they're not required, or where they may not make sense, just because they can, or sometimes just to get to apply and learn them (which should not be allowed by any good project manager in production software; just tell them to play in a sandbox they create). Though I called it the C++ syndrome, it is equally visible in other languages and stacks.


In fact I have now started dumping such sites as much as possible - unless they are essential, I stop visiting them.


I found it difficult to read, too. It strained my eyes :)


OP (as in site owner) here, fixed it! I went three shades darker. Let me know if it's easier now.


It's better, but I think it would be ideal if the text was completely black instead of a somewhat darker shade of grey.


A lot of people here are disagreeing on the basis that new language learners want to make something cool quickly. I agree, I've certainly taken the path of trying to teach fundamentals to people who very quickly lose interest.

However, while absolute beginners do need a hook, there's a transition phase that isn't filled: it comes between being able to copy paste SO answers together effectively enough to build a functional product, and actually understanding how that application works.

Coming from a JS background, I'd say a close equivalent in the past was frontend devs (gainfully employed as such) who didn't know any javascript; they just knew jQuery. I'm not sure how that's changed today, but I'm guessing the node package ecosystem breeds similar results.

I'm no beginner when it comes to programming, but my exposure to Python was relatively limited until recently. Now that I'm quite suddenly running some django installs, and trying to work backwards from learning django to properly grokking actual Python, these are the kind of dedicated resources I can see being of value.


One of the better pieces of advice I've read on this topic is to focus on creating something (in this case) using Python, rather than on learning Python and then finding something to do with what you learned.

Another personal point for me was that learning tends to focus on the logical modeling of a program and treat the physical modeling as an afterthought.

Most books and tutorials focus on creating functions, classes and modules (programming in the small), but spend a lot less time teaching how to group those into files and how to compile, package, distribute and deploy the program (programming in the large).

For those reasons I have to disagree with this article, since it seems to promote learning by focusing on programming in the small.


Learning Python via time and datetime is a sure way to hate the language.

I am an average amateur programmer and have used Python almost daily for a few years. Just today I needed to convert an epoch timestamp to its ISO counterpart. I tried without using arrow. After 3 minutes reading the convoluted docs for datetime, I failed. arrow.get(timestamp).isoformat() fixed that in a few seconds.

Yes I am weak. Yes, I should learn the library. But I do not want to suffer when there is a great lib helping me. Same goes for requests and a few others.
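For the record, the stdlib incantation does exist, it's just not discoverable — you have to know to pass a tzinfo or you get naive local time. The arrow line from the comment is shown for comparison, commented out since it's a third-party install:

```python
from datetime import datetime, timezone

ts = 1504483200  # an epoch timestamp (2017-09-04 00:00:00 UTC)

# stdlib: fromtimestamp + an explicit timezone, then isoformat()
iso = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
print(iso)  # → 2017-09-04T00:00:00+00:00

# third-party arrow (pip install arrow), as in the comment above:
# import arrow
# arrow.get(ts).isoformat()
```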


I wanted to learn Python by myself a few years ago, and it was not easy (difficult to justify using it at work when you know Perl well enough to do the job in 1/10th of the time). To keep up motivation, I followed https://www.coursera.org/learn/interactive-python-1. IMHO, it is very good for beginners. It avoids the library problem and lets you get fun results.


Python itself is already too much abstraction. Everyone interested in programming should learn the basics all the other stuff is built out of:

Hardware, binary, assembly, C

That doesn't mean to get full on embedded hacking, it means to get some basic grasp of how C maps to the hardware.

Then it doesn't matter how to continue learning, everyone individually has to find out what kind of further abstraction suits best.

Too much programming is taught from the top abstraction layer just to make awful software more quickly.


I disagree. Teaching works best when your pupils can see what they're making quickly. The more interactive and iterative a process can be, the less bored they get. Many tools call this gamification but you really don't have to attribute scores to things for people to enjoy a chunked process more.

But with C, a complete novice spends most of their time getting beaten up by boilerplate (how do I build and run and debug this?) and syntax issues. You missed a semicolon. You didn't declare this in the right place. You haven't allocated memory for that. All important things for C developers, but to a novice it's all just gibberish error messages that take a long time to find and fix.

By comparison, abstracted languages give you a decent REPL environment which can be ideal for teaching programming basics.

Every layer of abstraction you remove is another layer the novice has to handle. If you just want to get a foot in the door, and keep things fun, why bother with that? They can learn about that stuff tomorrow.

(I'm not saying this stuff isn't important to know, just consider the order that you teach it in.)


Let's reverse this problem: we go back and try to send a manned mission to the Moon.

This implies a huge amount of vertical engineering and research. But instead of saying that the government is going to pay for the project upfront, we say that all of these scientists and engineers are going to be paid only after the mission has been completed.

How many of them would be motivated enough to go through the whole vertically-planned project themselves; even if the project is itself quite inspiring?

Same with software - I don't think that starting at the highest levels of abstraction is just about "not getting bored really quickly". It's that you can quickly try and see what programming is without spending another 5 years trying to understand every possible intricacy of a modern computer. Call it "time resource risk management".

More importantly, most of the time if you're young but are already employed - you're being told what exactly you should do anyway.


The big question is why this person is trying to learn programming. Is it because they hope to spend the next 20-30 years of their career as a professional programmer, or because they have problems in their chosen field of study that they hope knowing a bit of programming will help solve?


dagw, true. That's exactly the path I had taken - but I had enough experience as a pure hobby to begin with to know exactly what I was choosing.

For example - I ended up in a deep learning department just a few months before it became a thing in the media. I knew it was beneficial for me due to a huge amount of statistics involved, even if I would later not get a job doing this. But if I did not know it - I would probably have taken another subject as it appeared too theoretical and didn't seem to be in any way truly relevant today (and there were alternatives as theoretical and as good).

How many students have any idea which subjects to pick in such a pragmatic way? I was not aware how lucky I was at the time, but later on I realised that I actually was.

It's great to say something along the lines of "just study for 5 years and it's probably going to be good for you, since education is generally good". But then you start meeting people with Ph.Ds from mediocre universities who have spent a good chunk of their lives on something that landed them a mediocre job, and that's where I believe the problem with the modern system lies.

Smart, dedicated people will probably find their way anyway. But what about somebody who isn't an architect, yet could become a great builder precisely because he's not so interested in the industry at a higher level, but is capable given the circumstances?


I don't think it has to be one or the other. Myself, I often get confused by top-down approaches because I obsessively ask about "what's underneath" and feel disoriented by arbitrary-seeming machinery. Friends of mine on the other hand get confused when bogged down with details that are "unimportant" to them.

It's a really difficult pedagogical task to create a curriculum that's helpful to a large swath of students all at once, and I don't envy teachers for that task.


Yes and no. It's true that there is a lot of cruft in setting up a portable write-compile-debug cycle for C, with often not very helpful error messages. On the other hand, you could just use an online compiler in a browser, and demonstrate all the other fun stuff on a chalkboard. People who really want to go low-level can then figure it out, since they've learned the terminology and the basic principles behind it.

Most of the 'fun & exciting' is the teacher's responsibility and comes down to how he/she teaches, as I would argue that the basics of how our stuff works are by default even more exciting than learning to use an arbitrary API of some one-button-click framework.

The article points in the right direction, it's just not enough.


Honestly, the decision of a first language doesn't even matter that much. Beginners need to learn how to program, by which I mean understanding and writing sequential logic. This is by far the biggest barrier. Without it, you get "wizard" programmers: Those who copy-paste magical incantations with no understanding of even the basic underlying logical flow.

You can use any language to teach this, but it should absolutely be the primary focus for absolute beginners. And it really doesn't matter what you use, because even in C or C++ you should not be providing skeleton code with all the boilerplate and complete build scripts.


Obviously you haven't had to teach a lot of young kids, because teaching them the basics, as you say, won't keep them interested long enough.


Was that meant for me? I'm for abstracted teaching. Let them accomplish complicated things with comparatively little theoretical understanding.

Letting them sink their teeth into logic and algorithms is much more fun and engaging than sitting them down and giving them the memory-allocation talk before their program crashes because they sliced the NUL terminator off their char array.


Wrong comment sorry.


It's not what you teach, it's how you teach - you can make any topic for any age exciting and fun as a teacher.


I agree about starting from the basics[0], because programming - as a practical art, not theoretical mathematics - is just about making computers do things. Commanding computers to do things is straightforward - everything else in programming is just managing complexity, which is, in my opinion, best taught as solutions to problems you encounter when simpler methods start to get out of hand.

--

[0] - With a proper, simplified dose of hardware, basic amount of binary, maybe some assembly. Not sure about C itself, but something on that level is the right basic abstraction.


I'm mildly surprised that HN is discussing, debating and defending opinions on pedagogy here.

Naive questions that initially pop into mind:

* In teaching programming, what metrics do we want to look at? Student performance on XYZ test is one, but these metrics have known drawbacks.

* Where can a curious non-teacher go to find high-quality empirical data about the efficacy of various curricula, etc?

* How much do things like class size and demographics affect the kinds of curricula we want to use?


Pedagogy may as well be a form of art, like any of the humanities: weak on the proven side, big on individual impact. If your classes are fun and engaging, I would say it's good teaching. If it's a monotone snoozefest, rather not, even if your content is super competent on paper. Hard to measure, better to feel individually.

Feynman for example was a good teacher, fun and engaging even on the most difficult topics, it's almost more a theatrical monologue piece and engaging to watch or listen to.

https://www.youtube.com/watch?v=f27bh4CIky4


By not using libraries you will write it yourself, if you are learning your code will be worst than the library.. . And it will take you a lot more times than learn how the library works.

As advice, if you want to do something, always check if someone have done it before. (i think most of hn readers have this habit.. )

If you find a big, active project like OpenCV or NumPy with good docs and a lot of Stack Overflow threads, go with it. That's not Python, but it is better than plain Python.

If it is less famous or active, look at the code.


The author suggests that while learning python one should stick to using the standard library instead of using additional libraries that sometimes come with archaic baggage or domain specific constructs that hinder learning.

I think the author could have done with calling the standard library by name instead of just calling out specific packages. And of course missing are functools and itertools and collections, which I find essential in writing concise and pythonic code. Instead they single out json and csv, which are special-purpose.
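To illustrate why functools, itertools, and collections come up so often, here is a minimal sketch of each (the word-list data is my own invented example, not from the article):

```python
from collections import Counter
from functools import reduce
from itertools import groupby

words = ["spam", "egg", "spam", "ham", "egg", "spam"]

# collections.Counter: frequency counting in one line
counts = Counter(words)
print(counts.most_common(1))  # [('spam', 3)]

# itertools.groupby: group consecutive equal items (input must be sorted first)
grouped = {k: len(list(g)) for k, g in groupby(sorted(words))}
print(grouped)  # {'egg': 2, 'ham': 1, 'spam': 3}

# functools.reduce: fold a sequence down to a single value
total_chars = reduce(lambda acc, w: acc + len(w), words, 0)
print(total_chars)  # 21
```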


> I was once tutoring a Master’s candidate at the University of Virginia how to use Python. She was interested in learning Python to interface with the OpenCV API via the cv2 module. I had recommended her many educational resources from complete beginner texts to advanced OpenCV texts.

> After a week of self-teaching, she had not made much progress in learning Python, mainly because she was trying to debug a specific script she found online. The script would read images via OpenCV’s cv2, then use matplotlib to display the images. The script used a strange matplotlib function I had never seen before for displaying the images. So, when she came to me with the error, I had no idea how to fix it. But… I did know how to accomplish the same thing with purely cv2.

> All the while, she had learned nothing about Python itself, because she was busy trying to debug a matplotlib function. Obviously, her time would have been much better spent reading the first few chapters of an introductory Python book. I wouldn’t say that this student was unmotivated, rather that she was working a little too hard to fix an archaic and unnecessary matplotlib function.

TL;DR: Instead of me learning enough about this one matplotlib function to help her, she should learn enough Python to write her own graphing library to use with OpenCV.

I'm surprised he was onboard with her using OpenCV. This is also probably the worst advice I've ever seen offered on Hacker News. Good lord.


I like the list of potential projects to attempt, in order to learn a language. I would add a Battleships program to that list as I find that provides an interesting challenge to learn to program without being too complex in terms of data structures/algorithms.
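As a sketch of why Battleships is a nice data-structure exercise: each ship can be a set of coordinates, and a shot is just a set-membership test. (All names and the board layout here are my own illustration, not from the comment.)

```python
# Minimal Battleships core: ships as sets of (row, col) coordinates
ships = {
    "destroyer": {(0, 0), (0, 1)},
    "cruiser": {(2, 3), (3, 3), (4, 3)},
}

def fire(ships, shot):
    """Return 'hit', 'sunk', or 'miss', removing hit cells as we go."""
    for name, cells in ships.items():
        if shot in cells:
            cells.remove(shot)
            return "sunk" if not cells else "hit"
    return "miss"

print(fire(ships, (0, 0)))  # hit
print(fire(ships, (0, 1)))  # sunk
print(fire(ships, (5, 5)))  # miss
```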


Learning python without 3rd party libraries is great if you aren't already a programmer. The author's view is likely to change after he works professionally as a programmer for a period of time.

It's good advice for non-programmers just starting out though.


No it's not. It is so much easier to do his CSV example with Pandas than with the standard library components he mentions. There are many, many settings where doing it with Pandas is how the experienced people do it. If you are a non-programmer trying to learn enough Python to do your non-programming job, learning the easy way that all of your peers use is a better answer than doing it this harder way.
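To make the comparison concrete, here is a hedged sketch of reading a CSV both ways. The column names and data are invented for illustration, and the pandas version is left as a comment since it requires a third-party install:

```python
import csv
import io

# Sample data standing in for a file on disk
data = "name,score\nalice,90\nbob,85\n"

# Standard library: csv.DictReader yields one dict per row, all values as strings
with io.StringIO(data) as f:
    rows = list(csv.DictReader(f))
scores = [int(row["score"]) for row in rows]  # manual type conversion
print(sum(scores) / len(scores))  # 87.5

# The pandas equivalent (requires `pip install pandas`):
#   import pandas as pd
#   df = pd.read_csv("scores.csv")
#   print(df["score"].mean())
```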


All of the modules he suggests avoiding are the ones that make up the standard data science tech stack. So, IOW, learn Python before doing data science. Your usual beginner Python tutorials don't mention Pandas or matplotlib, so this seems like a non-issue.


He points to a graduate student who had to pick up Python for some task and wanted to learn both the language and the tools necessary for that task. I can throw another anecdote into the ring and mention a web data mining course I'm taking right now where a lot of students are having to pick up Python alongside both scikit-learn and scrapy.


IMO, Empire of Code is one of the best, and definitely the most fun, ways to learn Python: https://empireofcode.com/


The color of the text in this article makes it really hard to read, light gray on a white background is not a great idea.



