Hacker News new | past | comments | ask | show | jobs | submit | voidfiles's comments login

RSS?


Markdown is the new Cursive


This seems like a great idea. If it can help even one developer it's worth it.


How would it help even one developer?

Or asked another way, what problem does this solve for you?


So, my personal blog doesn't get a ton of traffic, but the one article that gets the most traffic is an article about how to monkeypatch feedparser so that it doesn't strip out embedded videos.

While not hard evidence, I think it's indicative of the kind of experience a developer has when they choose to engage with syndication.


I added Kim to my ongoing set of Python serialization framework benchmarks; here is how it ranks:

  Library                  Many Objects    One Object
  ---------------------  --------------  ------------
  Custom                      0.0187769    0.00682402
  Strainer                    0.0603201    0.0337129
  serpy                       0.073787     0.038656
  Lollipop                    0.47821      0.231566
  Marshmallow                 1.14844      0.598486
  Django REST Framework       1.94096      1.3277
  kim                         2.28477      1.15237
Comments on how to improve the benchmark are appreciated.

source: https://voidfiles.github.io/python-serialization-benchmark/


This is brilliant, exactly what I was looking for. I did a profile recently on some API calls and found that 40-50% was being spent on serialization with marshmallow, which I'm looking to drop.

I'll be doing this stuff for myself, but would you be interested in having:

a) Support for lima: https://lima.readthedocs.io/en/latest/

b) more benchmark cases (serializing a larger list of objects)


Just a minor note: It seems you don't mention anywhere what those numbers actually mean. I'm assuming they are seconds, but I can't know for certain, which makes it really unclear if Kim is the fastest or the slowest.


Thanks so much for this, Voidfiles. We were under no illusions that we were the most performant library out there (yet).

This is a great start for us understanding where we need to get to! We've got some work to do :)


Human misery isn't something most companies want to be associated with. For that reason, many companies don't want their ads to run next to content like that. I've personally had to quickly pull ads off a site because of that fact.


I've been occasionally fascinated that newspapers don't have a flag in their CMSs for articles that it would be crass to plaster with ads.


The New York Times seems to do this:

http://www.niemanlab.org/2015/03/an-ad-blocker-for-tragedies...

If a story is marked as sensitive, an option is set and ads aren't shown on it.

The Guardian has this too. And I've done this on a few sites I've worked with as well.


We developed that flag after the incident I referred to.


We spend far more time consuming content generated by our friends and family than we do consuming content created by media companies. I don't think he could have conceived that a company like Facebook could control the means of distribution by providing something that people want more than professional media.


"We spend far more time consuming content generated by our friends and family then we do consuming content created by media companies."

I think this is generally true, but less true today than it may have been a few years ago.

What percentage of interactions on social networks involve sharing or referencing preexisting content of some sort? Movie trailers, articles, songs, Buzzfeed lists, surveys, memes, and so forth? Many of these forms of content have been created by one type of media company or another.


What problems did you run into while using RequireJS?


I write with Node in the backend and JS on the front-end, so switching between CommonJS and RequireJS styles was a source of constant irritation (and let's face it, the RequireJS style is annoying). I tried using the CommonJS style in RequireJS but I could never get it to behave correctly.
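For readers unfamiliar with the difference, the two styles look roughly like this. This is a sketch using a hypothetical `sum` module; the tiny `define()` shim just stands in for RequireJS so the snippet runs anywhere, and isn't how AMD is actually implemented:

```javascript
// Minimal shim standing in for RequireJS's define(), so this runs under plain Node.
function define(deps, factory) { return factory(); }

// RequireJS (AMD) style: dependencies declared up front, module body in a callback.
var sumAmd = define([], function () {
  return function sum(a, b) { return a + b; };
});

// CommonJS (Node) style: synchronous require()s, assign to module.exports.
function sum(a, b) { return a + b; }
module.exports = sum;

console.log(sumAmd(2, 3)); // prints 5
```

The extra callback wrapper around every module body is the part of the AMD style that people tend to find irritating.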

Also, though I found bower useful I disliked having yet another package manager, yet another manifest and yet another install step included in my development workflow. Using npm for both server and client side modules has been a dream.


With Browserify you have extra steps as well. Install browserify. Install a watch thingy (in grunt or gulp or write a shell script or whatever). Remember to run that watch thingy every time you develop.

That's why I prefer client-side loaders. All you have to do is copy the loader into a folder and create a .html file. I have my dev folder hosted by Apache, so I don't ever have to worry about starting some process, running npm install and watching it install the entire internet, etc.


When people complain about these types of "steps", it makes me wonder if they're not thinking clearly about what these steps actually mean, especially for larger projects or when working on a team where synchronization is absolutely key. These processes are here to help you help yourself.

Steps like these allow you to systematically control every aspect of the development process, and adapt. In the long run, these steps work in your favor.

They also allow you to hook into the more modern aspects of development. I mean, you don't want to type `npm install`? That's fine, but what you're essentially saying is that this incredibly useful ecosystem is irrelevant to your needs. I find that very hard to believe.


You misunderstand me. The benefit of a client-side loader is that these things can be added gradually. That doesn't mean you don't use many of the same tools. There's just not that 15 minutes of setting up your project folder, and of course not having to run a watch task every time you develop.


"Remember to run that watch thingy every time you develop."

Well, that's the beauty of it, for me. I run everything through gulp, so I don't need to remember to run the watch thingy - I type:

    gulp watch
which starts up my dev web server, complete with LiveReload shims enabled. Then when I'm ready to build I do

    gulp dist
and it packages everything up.
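A minimal sketch of what such a gulpfile might look like (gulp 4 API; the paths and task bodies are illustrative placeholders, not the commenter's actual setup):

```javascript
// Hypothetical gulpfile: `gulp watch` rebuilds on change, `gulp dist`
// does a one-off production build. Paths and tasks are placeholders.
const { src, dest, series, watch: watchFiles } = require('gulp');

function scripts() {
  // Copy (or transform/concatenate/minify) source JS into build/.
  return src('src/**/*.js').pipe(dest('build/'));
}

function dev() {
  // Rebuild whenever a source file changes; a dev server with
  // LiveReload shims would be started alongside this.
  watchFiles('src/**/*.js', scripts);
}

exports.watch = series(scripts, dev);
exports.dist = scripts;
```

The point being made is that both workflows collapse into a single remembered command once the config exists.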


I'm confused, you say you don't have to remember and then you say you typed gulp watch. Is the fact that you typed something an indication that you remembered to do so?

I understand that some people like this workflow, what I don't like about it is when I start working on a project I don't want to spend 15 minutes setting up build scripts, downloading the entirety of npm, etc. I just want to start hacking in the browser. Client-side loaders make that really easy, just drop in the .js file and go.

Again, I know some people like that workflow and I'm not saying you're wrong, just pointing out that there is a tradeoff, it's not a clear win by any stretch.


I see that you are trying to stick to your guns, but it sounds like you never even gave this a try. It's so easy to do, and if you write Sass, LESS, or any other CSS preprocessor language, then you're already doing this somehow (and if you don't, you're working on tiny projects).

Client-side loaders (e.g., the AMD-based RequireJS) are dumb. I've been there, done that. I'll minify my JS code and then "just drop the .js file" into my page, and let gulp watch for changes.


Client-side loaders support plugins too. Less, Sass, CoffeeScript, ES6, literally anything you want. I don't know why you would accuse me of not giving Browserify a try (I have several projects that use it, but I guess that's not giving it a try) when you didn't even know that client-side loaders support plugins.

Hell, you could even use a client-side loader exactly as you are doing with Browserify/Webpack. Just set up a watch task that builds unminified and there you go! It's just nice that they aren't dependent on a cli process. But unlike you, I'm not trying to convert anyone here, just pointing out that there are advantages to using client-side loaders. For example, it promotes separation of client-side code from server-side code. Using the same package.json file for both promotes bad practices like developing with the server running and writing code that's not easily testable without the server running.


"I'm confused, you say you don't have to remember and then you say you typed gulp watch."

Well you have to type something in order to start your project unless you like using file:/// URLs. "gulp watch" doesn't just run the watch task - it runs the entire dev environment with watch. How do you run multiple projects with Apache?

And while setting up these modules is a process, I only really have to do it once - I have a template directory I just copy into a new project. Then everything in the src/ directory gets processed accordingly once I type 'gulp watch'.


> Well you have to type something in order to start your project unless you like using file:/// URLs. "gulp watch" doesn't just run the watch task - it runs the entire dev environment with watch. How do you run multiple projects with Apache?

Apache serves everything under ~/dev and that's where I stick new client-side projects.


Do you have a build process? I still want to concatenate and minify for production, so I'm using a build task anyway. At that point having everything run in one place is no bad thing. Especially for whoever comes after me.


That really depends. If I'm writing a small module that others will use, there's no reason to build that. Or I might just be experimenting with some new browser API.

It's nice to be able to just start coding without the friction of setting up every project as though it were some large thing that would include production builds, automated tests, and many other developers. Of course you can work on those types of projects with client-side loaders just as easily.

I recommend trying out jspm: http://jspm.io/ It is all about removing the friction that people often have with client-side loaders (I have to maintain another config file, the horror! ;) but is also forward-compatible as it implements the upcoming ES6 module loading stuff. You can use CommonJS, AMD, or ES6, and mix and match the three.


Your process also has "extra steps", like installing and configuring apache, installing a loader, and creating an html file for it. There isn't anything that's just free, and different people just understand and prefer different workflows. For instance, I don't use browserify, but its workflow makes a lot of sense and "sounds right" to me, whereas yours sounds really strange.


Installing Apache/Nginx is a one-time deal. A browserify/webpack workflow requires that you set all of that up for each project, and that you start your watch task every time you develop. So if you're working on a few different projects, you either have to switch between them or start new watch tasks for each. I understand that some people enjoy this workflow and am not saying they are "wrong", just pointing out the added requirements.


Hmmm... I'm sure most people are like me and have their own scaffold set up, and don't rebuild everything from scratch. For every project that I write, I do...

1. git clone https://github.com/WINTR/grunt-frontend-scaffold

2. npm install; and then

3. grunt dev

...which watches everything on localhost:3000, including my unit tests and source code, and reloads on change. Dead simple, fast and consistent stuff and everything I could possibly need set up in less than 1 minute, every time.

Most teams, I imagine, work in a similar way.

Not to mention, if I wanted to pass over the project to another developer, I would just have to tell him to clone the repo and hit `npm install`.


Maybe this isn't such a problem, but having a system-wide httpd means that all projects have to be on the same version.


Use the browserify-middleware package, like so:

    var express = require('express');
    var browserify = require('browserify-middleware');

    var app = express();
    app.use('/js', browserify('./app/js', {
      transform: ['reactify', 'envify'],
      extensions: ['.jsx'],
      cache: app.get('env') !== 'development',
      minify: app.get('env') !== 'development',
      gzip: app.get('env') !== 'development',
      debug: app.get('env') === 'development',
      precompile: ['./app/app.js']
    }));
The server will dynamically bundle the JS for you; all you need is to run the server. In our apps, we do this for development, but in the production environment we do the packaging at deploy time (via Google Closure to minify) to create static files that can be served by Nginx.


Not to mention that the require('path/to/file') "sugar syntax" is hardly mentioned anywhere, and you end up having to track your imports one-to-one, in order.


Webdis has more features right now. It has authentication, and it supports things like websockets. It's also written in C.

Lark is written in Python, and I would argue that Lark does a better job of meeting the expectations of what an API should be like, supporting POSTs and DELETEs.

It would also fit in well with an existing flask project. It has a blueprint that you can mount.

OAuth integration is planned; I am working on it right now. I also plan on making websockets work in the same manner as flask-sockets.


Content owners don't make feed readers. If they didn't want people to pre-screen content, they could just get rid of their RSS feeds altogether. They haven't done that. They clearly find some value in users being able to find their content through RSS, so I don't think this is why feed readers have failed to break out.


If Google cannot monetize Reader in its current form, and effectively loses money because those same pages carry Google ads that aren't being viewed, then Google has no incentive to keep running Google Reader.


It's not just your tools though. They are asking you to throw away many of your ideas about web development as well. They ask you to adopt a new paradigm that doesn't fit in with the template, innerHTML style of development.

If you embrace that idea and go with the flow, you can do some amazing things amazingly fast with Angular that don't work so well in other frameworks.

