
> If they can make the average song listened to by a user just 1 second longer, they reduce that by about 0.5%.

This isn't how music royalties work. Rather, Spotify (and most other on-demand streaming services) pays out a percentage of its net revenue to rights holders. That percentage does not change based on how many streams there are in total, but the pool IS distributed proportionally by stream count, so it's more profitable for a rights holder to have more streams (the topic of the article).

Some high level information: https://support.spotify.com/us/artists/article/royalties/
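
To make the pro-rata split concrete, here is a rough sketch with made-up numbers (not Spotify's actual revenue share, stream counts, or anything from their API):

    # Toy pro-rata ("stream share") payout: a fixed share of revenue is pooled,
    # then split by each rights holder's fraction of total streams.
    net_revenue = 1_000_000                 # revenue pool for the period (made up)
    royalty_share = 0.70                    # fraction paid to rights holders (assumed)
    streams = {"label_a": 6_000_000, "label_b": 3_000_000, "label_c": 1_000_000}

    pool = net_revenue * royalty_share
    total_streams = sum(streams.values())
    payouts = {holder: pool * n / total_streams for holder, n in streams.items()}
    print(payouts)  # each holder gets pool * (their streams / all streams)

More total listening doesn't grow the pool; it only shifts who gets what share of it.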



Can you define “net revenue”? Do you mean gross revenue? Spotify does not earn a profit (aka net income).


They must be evil communists/socialists.


Is a stream defined as a full listen through the song, or like maybe halfway through?


Seems like they should measure proportions using seconds streamed vs units streamed?


They both seem primitive. If a user pays $10/month for a subscription, each month that $10 should be divvied up in proportion to the minutes they listened to each artist that month. That's paying out to the people who are keeping that person subscribed. Minus Spotify's cut, of course.
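
A rough sketch of that per-user split, with a hypothetical subscriber and made-up artists and numbers:

    # Toy user-centric payout: each subscriber's own fee is split by *their own*
    # listening minutes, instead of being pooled across all users first.
    subscription = 10.00
    platform_cut = 0.30                     # assumed fraction kept by the service
    minutes = {"artist_a": 300, "artist_b": 100, "artist_c": 100}

    pool = subscription * (1 - platform_cut)
    total_minutes = sum(minutes.values())
    payouts = {a: round(pool * m / total_minutes, 2) for a, m in minutes.items()}
    print(payouts)  # {'artist_a': 4.2, 'artist_b': 1.4, 'artist_c': 1.4}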


I'm sure that's where most people think their subscription money goes, but the industry made sure it doesn't work that way.


That’s kind of how SoundCloud does it (they call it the user-centric payment model); it has the added benefit of curtailing fraud.


White noise and other background sounds cause issues here.



So Universal Music Group is depending on its competitor, Warner Music Group (owned by Access Industries which also owns Deezer)?

Interesting. I thought Universal/Warner/Sony were propping up Spotify as leverage for when they need to negotiate with Apple/Google/Amazon.


Those are already an issue that Spotify has.

https://www.engadget.com/spotify-almost-removed-white-noise-....


How so? Maybe that just means those white noise tracks bring the most value.


Right? As a user, if I listen to 2 hours of content split equally between sources A and B, it seems fair that they each get half of my subscription fee (less Spotify's cut), regardless of whether A views B's content as "less worthy". On the other hand, I wouldn't sign up for a monthly white noise service, and if A went on strike and didn't renew license agreements, that's what Spotify would become. Record labels do have leverage over white noise, which is a commodity (right? y'all aren't beholden to certain streams are you??)


I actually do have certain episodes/streams saved that are my go-tos.

Navigating "rain sounds" has become a lot more difficult lately specifically due to record labels complaining, particularly if you want one continuous 8hr stream. Instead all I can find now are playlists with a bunch of things I don't want. If I didn't have my favorites already saved I wouldn't be able to find them at all now.


There's a lot of confusion in this thread about "before the big bang". I was also confused, did some googling, and found this explanation from a professor of theoretical physics. It seems it's actually pretty normal to refer to the big bang as happening after the initial inflationary epoch, while others use the term to include those earlier times as well.

>Do not allow yourself to be confused: The Hot Big Bang almost certainly did not begin at the earliest moments of the universe. Some people refer to the Hot Big Bang as “The Big Bang”. Others refer to the Big Bang as including earlier times as well. This issue of terminology is discussed at the end of this article on Inflation [https://profmattstrassler.com/articles-and-posts/relativity-...].

The article is talking about the "hot big bang", so it's using terminology that is accepted by other theoretical physicists.

https://profmattstrassler.com/articles-and-posts/relativity-...


Wouldn’t this be the result of some theoretical physicists moving the goalposts?

It sounds like they feel the commonly accepted understanding of the Big Bang is overbroad. Fine. Find new words to describe the subsets of the event. Redefining the word is just causing confusion.


Most of the stuff I have read on this presupposes that some kind of phase transition (think of the early universe being in a 'boiling' phase and then condensing) caused the field which drove inflation (with the force carrier called the inflaton) to decay and release all the energy in the field (that is, the inflatons decayed). This decay process is what we conceive of as 'the big bang', as in the start of the energy-dense Universe we see a glimpse of in the CMB.

You are right that the goalposts have been moved. When analysis of the CMB began, it was noticed that it was far too uniform in distribution and temperature for what was previously thought to be possible. It was at this point that an inflationary period was tacked on before 'the big bang' because that was the only way to get the kind of 'big bang' we seem to have had.


I learned about this from Red Dead Redemption 2 (spoiler ahead), where the protagonist purchases a pre-built home from a catalog modeled after the Sears Home catalog.

What I found particularly interesting is that what Americans today consider stereotypical American farm houses were actually these Sears houses! They had a significant influence on the country's architectural history.

source: https://www.reddit.com/r/AskHistorians/comments/bgxt8q/in_re...


In Boardwalk Empire, too, one of the characters lives in one.


While my experience is not from the 90s, I think I can speak to some of why this is. For some context, I first got into neural networks in the early 2000s during my undergrad research, and my first job (mid 2000s) was at an early pioneer that developed their V1 neural network models in the 90s (there is a good chance models I evolved from those V1 models influenced decisions that impacted you, however small).

* First off, there was no major issue with computation. Adding more units or more layers isn't that much more expensive. Vanishing gradients and poor regularization were a challenge and meant that increasing network size rarely improved performance empirically. This was a well-known challenge up until the mid-to-late 2000s.

* There was a major 'AI winter' going on in the 90s after neural networks failed to live up to their hype in the 80s. Computer vision and NLP researchers - fields that have most famously recently been benefiting from huge neural networks - largely abandoned neural networks in the 90s. My undergrad PI at a computer vision lab told me in no uncertain terms he had no interest in neural networks, but was happy to support my interest in them. My grad school advisors had similar takes.

* A lot of the problems that did benefit from neural networks in the 90s/early 2000s just needed a non-linear model, but did not need huge neural networks to do well. You can very roughly consider the first layer of a 2-layer neural network to be a series of classifiers, each tackling a different aspect of the problem (e.g. the first neuron of a spam model may activate if you have never received an email from the sender, the second if the sender is tagged as spam a lot, etc). These kinds of problems didn't need deep, large networks, and 10-50 neuron 2-layer networks were often more than enough to fully capture the complexity of the problem. Nowadays many practitioners would throw a GBM at problems like that and can get away with O(100) shallow trees, which isn't very different from what the small neural networks were doing back then.
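
For a rough sense of scale, here's a minimal numpy sketch of the kind of small 2-layer network described above (the spam-style features and sizes are invented for illustration; weights here are random, whereas in practice they'd be trained):

    import numpy as np

    # A tiny one-hidden-layer network of the size that was common back then.
    # Each hidden unit acts roughly like its own small classifier over the
    # hand-engineered input features.
    rng = np.random.default_rng(0)
    n_features, n_hidden = 5, 20        # e.g. sender-unknown, flagged-as-spam, ...
    W1, b1 = rng.normal(size=(n_features, n_hidden)), np.zeros(n_hidden)
    W2, b2 = rng.normal(size=(n_hidden, 1)), np.zeros(1)

    def predict(x):
        hidden = np.tanh(x @ W1 + b1)   # each column of W1 is one "mini classifier"
        return 1 / (1 + np.exp(-(hidden @ W2 + b2)))  # sigmoid output, e.g. P(spam)

    x = rng.normal(size=(1, n_features))  # one example with 5 engineered features
    print(predict(x))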

Combined, what this means, from a rough perspective, is that the researchers who really could have used larger neural networks abandoned them, and almost everyone else was fine with the small networks that were readily available. The recent surge in AI is being fueled by smarter approaches and more computation, but arguably much more importantly by the ton of additional data that the internet made available. That last point is the real story IMO.


The funny thing is that the authors of the paper he linked actually answer his question in the first paragraph, when they say that the input dataset needs to be significantly larger than the number of weights to achieve good generalisation, but there is usually not enough data available.


Their business plan is to build custom models for clients/companies who want to own models built on their own data. No comment on the viability of such a business model.

https://twitter.com/EMostaque/status/1649152422634221593


Python did not win the ML language wars because of anything to do with front-end, but rather because it does both scripting and software engineering well enough. ML usually requires an exploration/research (scripting) stage and a production (software engineering) stage, and Python combines the two more seamlessly than the ML languages that came before it (Matlab, Java, R). Notebooks became the de facto frontend of ML Python development, and to me that's evidence that frontend in ML is inherently messy.

Do I wish a better language like Julia had won out? Sure, but it came out 10+ years into this modern age of ML, which is an eternity in computing. By the time it really gained traction it was too late.


> because it does both scripting and software engineering well enough

It certainly does scripting decently, but for software engineering it's hell.


I agree, but can you imagine that Matlab, and then R, were the de facto ML languages before Python really took off? Putting R models into production was an absolute nightmare. Before R, I was writing bash scripts which called Perl scripts that loaded data and called C code that loaded and ran models that were custom built by Matlab and C. Python (and the resulting software ML ecosystem) was a huge breath of fresh air.


I agree with you. I also find Python to be slower for iterations and refactoring.

I ranted on about it recently - https://avi.im/blag/2023/refactoring-python/


CREATE VIEW is a great V0.5 for a data warehouse, and it's what I recommend people do if possible so they can concentrate on building the right schema, naming standards, etc.

dbt is the V1. You get a lot of tooling, including a proper DAG, logging, and parametrization. You also get the ability to easily materialize your tables in a convenient format, which is important if (probably when) you figure out consistency is important. Views can take you far, but most orgs will eventually need more, and dbt is designed to be exactly that.

As a side note, moving from views to dbt is actually quite easy. I've done it several times and it's usually taken a couple of developer days to get started and maybe a couple weeks to fully transition.


It's gotten much easier in the last 24 hours because of this binary release of a popular stable diffusion setup+UI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/rele...

(you still need an Nvidia GPU)

Extract the zip file and run the batch file. Find the ckpt (checkpoint) file for the model you want. You can find openjourney here: https://huggingface.co/openjourney/openjourney/tree/main. Add it to the model directory.

Then you just need to go to a web browser and you can use the AUTOMATIC1111 webui. More information here: https://github.com/AUTOMATIC1111/stable-diffusion-webui
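
If you'd rather drive it from a script than the web UI, here's a minimal sketch using the Hugging Face diffusers library instead. It assumes a diffusers-format copy of the model exists (e.g. under a repo id like prompthero/openjourney; check the model page), plus a CUDA GPU and the torch/diffusers packages installed:

    import torch
    from diffusers import StableDiffusionPipeline

    # Repo id is an assumption; point it at whichever openjourney release you want.
    pipe = StableDiffusionPipeline.from_pretrained(
        "prompthero/openjourney", torch_dtype=torch.float16
    ).to("cuda")

    # "mdjrny-v4 style" is the trigger phrase the model card suggests adding.
    image = pipe("retro sci-fi city, mdjrny-v4 style").images[0]
    image.save("openjourney_sample.png")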


oh this is so great - thanks!


If I am reading this correctly, a pawn past row 7 will automatically be replaced by a Queen:

https://github.com/ehulinsky/AnalogChess/blob/main/analogche...
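
For reference, the behaviour it describes boils down to something like this; the toy class and attribute names below are invented for illustration, not the repo's actual code:

    from dataclasses import dataclass

    @dataclass
    class Piece:
        kind: str    # "pawn", "queen", ...
        row: float   # continuous row position (analog chess allows fractional rows)

    def maybe_promote(piece: Piece) -> Piece:
        # Any pawn that ends up past row 7 is auto-promoted to a queen.
        if piece.kind == "pawn" and piece.row > 7:
            piece.kind = "queen"
        return piece

    print(maybe_promote(Piece("pawn", 7.4)))  # Piece(kind='queen', row=7.4)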


The y-axis has actual meaning: it tells you the purchasing power of a public stock, which will never be 0.

What you could do is normalize each series by its value at the start of the chart, which would make every line start at 1. On a log plot, dividing all the values by the starting value just moves the line up or down without changing its shape. This could make it easier to compare the lines, but in doing so you throw out information (the real value of the y-axis).
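
A rough sketch of that normalization (the series values are made up):

    import numpy as np

    # Dividing each series by its first value makes every line start at 1.
    # On a log axis this is just a vertical shift, log(x/x0) = log(x) - log(x0),
    # so the shape of each curve is unchanged.
    prices = {
        "stock_index": np.array([25.0, 40.0, 90.0, 300.0]),
        "bond_index":  np.array([10.0, 13.0, 18.0, 26.0]),
    }
    normalized = {name: p / p[0] for name, p in prices.items()}
    print(normalized)  # both series now start at 1.0; relative moves are preserved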


> The y-axis has actual meaning: it tells you the purchasing power of a public stock, which will never be 0.

If we are trying to visualize the total return of a given asset since 1987, how is the price of a single stock (an arbitrary unit) in 1987 or any time since, relevant data?

The ROI expressed as a percentage on the y axis (with 0% at the beginning of the period) would be a much better visualization of the relative returns among asset classes throughout the period.

Currently I get USD < VBMXFX < VFINX at the beginning of the chart, for reasons that have nothing to do with the total return since 1987.

The drawdown chart is even more confusing with the USD starting at -81%. If we were plotting a chart of drawdown of multiple currencies, the older ones would start lower (since they've had more time to be affected by inflation) which only makes it hard to visualize the answer to the question "how did they do since 1987".


As another commenter points out, it completely depends on what feature of the data you're trying to highlight, or what question you're trying to answer.

I don't think anyone looking at this chart really cares about the inflation-adjusted value of one share in a specific year; I think the main point of this chart is the real returns of stocks, bonds, and cash (or an approximation of such, represented by the selected indices) over time.

