Most languages I've worked with I neither love nor dread. Clicked through to the SO survey results and even skimmed the methodology section, but I couldn't find what question they asked to get these numbers.
I had a similar thought as I was reading it. I remember a previous version of the survey, or of a similar survey, that was conducted differently. In that version people could indicate, in effect, that they neither loved nor dreaded a language, or both loved and dreaded it.
I remember in the survey reports it was discussed because there were some languages that were on both the "most loved" and "most dreaded" lists. I want to say C and R (and maybe Haskell??) were examples of that, but it's been a while.
I'm not sure why they made them mutually exclusive, as it muddies things a bit.
The survey question is included under the chart, and implies that users were only given a checkbox as input. "Loved" and "Dreaded" seem to mean, of people who currently use the language, which percentage would like to continue doing so and which would not.
A "no strong opinion" option may have provided some interesting nuance.
The question they asked is at the bottom of the table. Most loved is defined as (people who want to keep using the language) / (people who have used the language in the past year). Dreaded is 1 - loved.
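In code the definition is just a ratio (the counts here are invented for illustration, not the survey's):

```python
# Illustrative only -- these counts are made up, not from the survey.
used_past_year = 1000        # respondents who used the language in the past year
want_to_keep_using = 620     # of those, how many want to keep using it

loved = want_to_keep_using / used_past_year   # 0.62 -> "62% loved"
dreaded = 1 - loved                           # 0.38 -> "38% dreaded"
print(f"loved: {loved:.0%}, dreaded: {dreaded:.0%}")
```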
Whoa. For years I've been seeing YouTube videos that seem to just teleport choppily around what I assumed were silences. I always assumed there was a standard tool that everyone used to do this. I can do the equivalent thing to a podcast episode in like 5 seconds in Logic. The notion that people have been doing this by hand is staggering, but kudos to you for finally coming along and filling this niche.
It's not automatic, but there is this marker tool that uses an audio track to make note of important timestamps while you are recording: https://github.com/evankale/Blipper
The author also wrote supporting scripts for Vegas to extract scenes based on the position of the blips in the audio.
1. Highlight the clip and "strip silence" to split it into a bunch of separate clips that leave out the silent bits
2. Highlight those clips and "shift left within selection" (I might not have that command name exactly right) to collapse them against each other
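For anyone who wants to script those same two steps outside a DAW, here's a rough sketch using pydub (an assumed choice of library, not one mentioned above): split on silence, then butt the remaining chunks up against each other.

```python
# Rough sketch of the "strip silence, then collapse" workflow using pydub.
from pydub import AudioSegment
from pydub.silence import split_on_silence

audio = AudioSegment.from_file("episode.wav")

# Step 1: split into chunks, dropping anything quieter than -40 dBFS
# that lasts longer than 500 ms (thresholds are guesses -- tune to taste).
chunks = split_on_silence(
    audio,
    min_silence_len=500,
    silence_thresh=-40,
    keep_silence=100,   # keep 100 ms of padding so the cuts don't sound abrupt
)

# Step 2: "shift left" -- concatenate the chunks back-to-back.
result = AudioSegment.empty()
for chunk in chunks:
    result += chunk

result.export("episode_no_silence.wav", format="wav")
```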
thanks! yeah, we have several video editors on our team and i edit a lot of videos - it's just how it is haha. there are tools that help with this problem, but they tend to be plugins or one-off tools, so we're happy that we can go end to end in one spot, all in the browser
"HiQ only takes information from public LinkedIn profiles. By definition, any member of the public has the right to access this information. Most importantly, the appeals court also upheld a lower court ruling that prohibits LinkedIn from interfering with hiQ’s web scraping of its site."
Surely I'm not reading this correctly. This would seem to suggest that websites are not legally allowed to prevent bots from crawling their sites. Lots of sites have ToS preventing such things, are those legally void now? Are captchas on public pages illegal, even if you request the page 8000 times in a second?
"In this case, hiQ argued that LinkedIn’s technical measures to block web scraping interfere with hiQ’s contracts with its own customers who rely on this data. In legal jargon, this is called” malicious interference with a contract”, which is prohibited by American law."
This is almost weirder. If LinkedIn wanted to force users to sign in to view profile info, would they be not allowed to do that because some company had signed a contract that implicitly assumed access to that data? If someone writes a web scraper for my site, and I unknowingly change my site in a way that breaks that scraper, can a court force me to revert the change?
Seems to imply that every business is somehow beholden to every contract signed by anyone.
> Lots of sites have ToS preventing such things, are those legally void now? Are captchas on public pages illegal, even if you request the page 8000 times in a second?
ToS are subservient to the law; you can (probably) terminate the account of a user who breaks your ToS, but if the user does not have a service account (as is the case for HiQ; it doesn't seem they were using accounts for this), then your ToS does not apply, since you've technically not entered a binding legal contract with them.
> This is almost weirder. If LinkedIn wanted to force users to sign in to view profile info, would they be not allowed to do that because some company had signed a contract that implicitly assumed access to that data? If someone writes a web scraper for my site, and I unknowingly change my site in a way that breaks that scraper, can a court force me to revert the change?
IANAL, but I believe that'd fall on intent, and intent is often difficult to prove at a personal level, but not necessarily at a company level. If your intent for putting up barriers that happen to impact scraping, whatever they may be, was indeed to knowingly prevent scraping from a particular company, then you may be liable under this decision. This is the only part of the decision I'm torn on, since it's a bit messy to really prove such things. I'd be much more comfortable with allowing companies to take whatever measures they feel necessary to prevent scraping, and also allowing scrapers to legally circumvent those measures without threat of prosecution, assuming they didn't actually hack into anything.
> but if the user does not have a service account (as is the case for HiQ, it doesn't seem they were using accounts for it), then your ToS does not apply, since you've technically not entered a binding legal contract with them.
Are you sure about this? I am not a lawyer, but I believe that the Terms of Service applies to all users, not just those that explicitly set up a user account.
I have interpreted the LinkedIn ruling to mean that scraping public data is no longer criminal activity but it still leaves you open to civil lawsuits for violating the ToS of the website you are scraping.
> Are you sure about this? I am not a lawyer, but I believe that the Terms of Service applies to all users, not just those that explicitly set up a user account.
How would that even work? If I browse to any random public page of your website, it's served to me before you've even transmitted the terms of service. How could I be bound by those terms of service when I haven't even seen them?
As an engineer, I agree with what you are saying, but I think normal people and the courts disagree.
I think these sorts of contracts are called Adhesion Contracts (https://www.investopedia.com/terms/a/adhesion-contract.asp) and we interact with them all the time. For example, if you valet your car, the valet will hand you a piece of paper with a number printed on it to retrieve your car. On that paper you will find an adhesion contract that is valid and real (although not as powerful as the types of contracts that you sign)
This does not work, at least for software licensing, based on precedents for shrink-wrap contracts, so it likewise would not work for licensing the use of data.
A paper handed to you by the valet is not an immediate contract, since you can decline to agree to it and the service simply doesn't happen.
You cannot do that with a publicly visible website, unless you show ToS and require agreement before first use.
If you allow a non-transferable license then said data cannot be used by a search engine. If it's transferable you just pushed the problem towards scraping a different bot.
(Well, you could have a direct agreement with a few major search engines.)
IANAL, but it seems like ToS could still govern your use of the data which you viewed. Sure, it seems like you couldn't claim any violation based on visiting a random page. But if the ToS is clearly identified on the page and you do something with the data that violates them, perhaps the owner of the site has a case.
Except it sounds like the owner doesn't. If the information is on the page made public, the owner of the page can't place terms on what is done with the data downstream. They'd have to implement some real binding system such as authentication where CFAA would apply. (IANAL)
Correct, but all of that is void if the data presented is any sort of protected information (copyright, IP, etc.). You can't, for example, scrape Yahoo Finance for pricing and dividend history and republish on your own stock tools website. They have a license to redistribute that data and publish on their own website. Similar story for copyrighted text and things of that nature.
That would require at least showing that ToS on first use. A link on a page is insufficient.
And said ToS would have to force copyright reassignment rather than a general licence, making LinkedIn culpable for any unlawful content published by users of its site.
I am a lawyer, and there isn't really an easy answer to these questions.
TOS are a lot like EULAs. If they look like contracts of adhesion, then they're going to get more scrutiny and skepticism. The TOS that you claim applies even to every single random visitor to your site where they do not in fact affirmatively agree to the terms is potentially going to look more like a contract of adhesion. That's a lot harder to enforce.
If they are used more for CYA so that you can ban undesirable accounts from your website which people explicitly agreed to when they signed up for it, or so that you can just up and alter your entire business model without having to give all of your customers refunds, then they're easier to defend.
Just my general opinion, of course. Every jurisdiction is different.
Also not a lawyer, but you cannot force me to accept your terms of service. Contract law requires both parties agree to enter it.
When you create an account, etc., you are agreeing to those terms. If I browse a public webpage that just has a terms of service link on the bottom of it, I've not agreed to anything.
> Are you sure about this? I am not a lawyer, but I believe that the Terms of Service applies to all users, not just those that explicitly set up a user account.
Typically you'll see TOS say something along the lines of "by continuing to access this site you agree..." or "if you do not agree with these terms you may not access this site..."
Whether that's enough to create a binding contract depends on the jurisdiction and who you ask.
It can also depend on the terms themselves. I can put "by using this site you agree to bake me a chocolate cake" on my website all day, but that doesn't mean I will be able to force you to bake me a chocolate cake.
From the article, the LinkedIn decision was that scraping data does not violate the Computer Fraud and Abuse Act. Violating that act was considered to be criminal activity. (https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act)
But the claim of a violation was only a claim as part of a civil trial. The law has both civil and criminal elements to it, and this is about the tort part of the law.
LinkedIn made threats accusing hiq of criminal behavior, but that doesn't mean there's any criminal precedent being set here, as far as I can tell. And no one was criminally charged.
Separately, part of the ruling states that for the purposes of authorization, defying a cease and desist letter does not constitute illegal access, which might have some criminal implications. They imply some sort of technical authorization system must be bypassed, which didn't happen, since the data is "public."
(Which doesn't square well, imho, with existing meatspace law. If a public serving business banned someone from their store, the door being unlocked isn't an excuse to ignore that ban and trespass. But I digress.)
With the overlapping areas of law, it's admittedly beyond my understanding. But the law is generally viewed, like the DMCA, as being overreaching, if not at least partly unconstitutional.
The CFAA is overreaching, and used often as a catch all. 'Reply All' has a good episode which explores this. This is actually what was used against Aaron Swartz when he was charged for downloading academic journals from MIT, and why his charges were unjustly severe.
There's a long, long history (probably hundreds, if not thousands, of years old) of selling aggregated or processed publicly-available information.
I'm not particularly thrilled with it, but enough people think of it as a valuable enough service to pay for; even if they know they could get it themselves, for free.
LinkedIn users (as opposed to the company) might actually like what HiQ is doing, as it may help their own prospects.
It is true in the current situation, though I would prefer that we ensure free data stays free. In that case, buyers of data would be incentivized to pressure providers of free data to improve the data quality.
The data does remain free, as long as LinkedIn still provides it for free.
The data without the noise is what you're paying for. The service of winnowing out what you care about from what you don't care about.
Considering how big of an effort it is, and that the source from which it came is still available, why should the cleaned data be free? If I collect fallen trees from public land and chop them into usable firewood, should my bundles of firewood also be free? Or if I collect solar power with my own solar cells, should I have to give you the electricity for free?
I think this is especially relevant when it comes to things that fall under disclosure & transparency requirements - a lot of information that is legally required to be made available isn't legally required to be convenient. So, as a patient, you may have the absolute right[1] to a free copy of the charge master[2] of a hospital you're admitted to but it could be required that you pick it up in person or that it is only supplied in microfiche form... so a company that's aggregated this and is reselling it can deliver real value.
1. This specific example is BS but plausible - I just wanted something more specific than the vagaries around things like FOIAs or shareholder reports which both have specific facts that can be rendered useless unless you have the context.
I'm thinking of processed GIS data. If you have ever tried using the various formats that are supplied by government sites, you know what a huge pain it is.
I'm happy to pay a reasonable price for an interpreted and bowdlerized version.
I actually have! I had to import a huge file of all of the culverts around storm drains in a state, and each culvert was multiple pieces of geometry, none of them grouped together in any logical way. It was just a huge list of rectangles that looked like culverts when viewed visually but no way to identify them as being one culvert without heuristics on how close each rectangle was to others. Massively long process that should not have been so.
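One way to script that kind of proximity grouping (not necessarily what was used there) is a union-find over padded bounding boxes: treat two rectangles as belonging to the same culvert if their extents, expanded by a small tolerance, overlap. A rough sketch with made-up data:

```python
# Sketch of grouping loose rectangles into "culverts" by proximity.
# The tolerance and the data format are assumptions, purely for illustration.

def boxes_touch(a, b, tol=1.0):
    """True if axis-aligned boxes (minx, miny, maxx, maxy) overlap once padded by tol."""
    return not (a[2] + tol < b[0] or b[2] + tol < a[0] or
                a[3] + tol < b[1] or b[3] + tol < a[1])

def group_rectangles(boxes, tol=1.0):
    """Union-find: merge boxes whose padded extents overlap; return a list of groups."""
    parent = list(range(len(boxes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if boxes_touch(boxes[i], boxes[j], tol):
                union(i, j)

    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(boxes[i])
    return list(groups.values())

# Example: two rectangles close together form one culvert, a third far away is its own.
rects = [(0, 0, 2, 1), (2.5, 0, 4, 1), (100, 100, 102, 101)]
print(group_rectangles(rects))  # -> two groups
```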
The data is free, but the aggregated, formatted data has been worked on and processed; are you saying the resulting aggregated data should also be free? That isn't going to happen - why would anyone do that work for free?
Or are you afraid LinkedIn and others will make everything private? That's completely up to LinkedIn or individual LinkedIn users what they want to make private vs public. Maybe more data would be made private if they don't want it scraped. I don't think that's inherently a good or bad thing.
I'm trying to puzzle out how this works in practice. So if LinkedIn has truly public data (no login required to view) then it can be scraped no problem.
But if it's only accessible with a login, then it falls under TOS and they can be blocked?
> Surely I'm not reading this correctly. This would seem to suggest that websites are not legally allowed to prevent bots from crawling their sites. Lots of sites have ToS preventing such things, are those legally void now? Are captchas on public pages illegal, even if you request the page 8000 times in a second?
This is just a preliminary injunction. This wasn't an actual ruling on the case. This just says that until there is a ruling they can't stop the scraping to make sure the company isn't put under while waiting for an actual ruling.
You don’t understand what a preliminary injunction is then.
It’s a very, very strong indication that they will win. Courts don’t issue preliminary injunctions unless it’s extremely likely the side who won the preliminary injunction will win.
Huh, I thought in the USA they also issued them to avoid the eventual judgement being rendered irrelevant. So, where the case is not clear cut, the injunction could prevent one party acting to 'kill' the other (and so avoid judgement) in the meantime?
Could you cite something on this that indicates this (my understanding here) is wrong?
It only requires a “substantial” likelihood that side will win (not an “extreme” one), which basically means there’s a substantive dispute. The more difficult criterion is a substantial likelihood that irreparable harm will occur if the injunction isn’t granted (irreparable harm is supposed to be a pretty extreme thing — it means you can’t fix it with any amount of money).
> This is almost weirder. If LinkedIn wanted to force users to sign in to view profile info, would they be not allowed to do that because some company had signed a contract that implicitly assumed access to that data? If someone writes a web scraper for my site, and I unknowingly change my site in a way that breaks that scraper, can a court force me to revert the change?
LinkedIn has long wanted to have their cake and eat it too - they advertise that data as being publicly accessible and allow Google to index specific user pages, but then attempt to restrict other bots from crawling it.
If you have private data behind a login there isn't an issue here - if you have public data but want some people to log in before viewing it (or not be able to view it) then that's where this ruling comes up. So, this mostly hits sneaky SEO folks and dark UX patterns that rely on tempting someone with accessible data and then pulling the rug out from under them at the last minute.
If your website places data outside of authentication then everyone should be able to see that data... I'm curious to see the specifics around
> Surely I'm not reading this correctly. This would seem to suggest that websites are not legally allowed to prevent bots from crawling their sites. Lots of sites have ToS preventing such things, are those legally void now? Are captchas on public pages illegal, even if you request the page 8000 times in a second?
though - DoS attacks are clearly illegal, but with this precedent there's going to be a lot of back and forth to see where the line between DoS and scraping falls... and I think that makes this precedent a lot weaker than the headline would have you believe. A company can still threaten to drag you through a lot of litigation by accusing you of malicious page requests, it'll take a few cases to define where that line needs to fall.
This reminds me about Twitter, when I click to see a thread for a tweet it asks me to login, but if I open the link in a new tab it loads the thread just fine.
LinkedIn want their data to be scraped by bots, so they have to keep it public - otherwise you wouldn't find people's profiles on Google. They just don't want bots from their competitors like hiQ to scrape it.
To me, this is crucial. If it's public and available for google, it's public and available for everyone. If you want content to be private, then make it private and accept that you won't get search engine traffic. Otherwise, don't be surprised when your publicly accessible content is accessed by gasp the public.
In other words, the judges said that LinkedIn couldn't use the US legal system to force HiQ to stop. Judges didn't say that LinkedIn was barred from using technical measures.
The court did allow a preliminary injunction against LinkedIn, due to the possibility of "monopolies" (to be determined in Court later), pending resolution of that latter question.
LinkedIn might still win their claim to their right to block scrapers via technical means.
LinkedIn can't prevent HiQ from attempting to scrape their site through force of law.
LinkedIn can rate limit requests, make their site hard to scrape, change their format, whatever. LinkedIn is in no way responsible for how HiQ fulfills its contract to its customers. HiQ is attempting to say that if I sign a contract to provide you with a Tesla, then it would be illegal for Tesla to stop me from just taking one from them to give to you. If that sounds stupid, that's because it is.
The court document says "... refrain from putting in place any legal or technical measures with the effect of blocking hiQ's access to public profiles." on page 11. I wonder if they mean targeted measures specifically blocking hiQ but allowing others such as Google.
> hiQ also asked the court to prohibit LinkedIn from blocking its access to public profiles while the court considered the merits of its request. hiQ won a preliminary injunction against LinkedIn in district court, and LinkedIn appealed.
Whether LinkedIn is the good guy or bad guy here doesn't matter when the decision creates precedent for the rest of us.
Surely a healthier precedent is that we can respond arbitrarily to requests and have no obligation to the requester. So what if I want to randomize the html structure on every request or block requests from Tor because 100% of them are abuse? Can someone take me to court on the grounds that either is effectively "blocking" their scraping syndicate? Why not?
I feel like once CFAA is off the table (which I do agree with), the cat and mouse game is a fair middle ground. Keep web scraping a sport!
There is a large banner next to the highway that shows some weather information which, if properly organized (let's say into a monthly almanac), people would pay money for. The banner owner does not make money this way - he asks you to go to his website and sign up for an account. But you drive the highway (the internet) every day, look at the banner, write down the weather updates, and then offer them for sale on your own website. The owner gets angry and sues you. The court decides you are free to drive by the highway and free to put your eyeballs on the weather banner, especially given that the banner is available to everyone (LinkedIn profiles are available to view without needing an account), and you are free to use the information you obtained, without interfering with said banner, in the form of a monthly almanac that you sell. At the end of the day, the banner owner does not own the weather information that someone else put up there (for example, a meteorologist).
Personally, I think it's a healthy decision. Otherwise it would be similar to deciding with prejudice who should be allowed to enter and browse a street store that by law is open to everyone.
This would mostly mean that you cannot start interfering with webscraping you previously allowed merely because you learned that they're making money with the scraped data.
It seems absurd if the 'interference' only directly affects their own property. Like, if my neighbors start monetizing livestreaming my backyard, suddenly I can't put up a fence? Except worse because in actuality, this third-party contract is costing them money through server load and bandwidth.
Your analogy doesn't hold. Your backyard is private property. The data that LinkedIn publishes is intended for the public. That's why Google can index the pages and give you results from LinkedIn.
It does, in the US. You're likely making an inconsistent comparison.
Property ownership has nothing to do with visual access.
You cannot legally be barred from casually (involuntarily) perceiving something. It's reasonable to put up physical barriers to reduce what is casually perceived. It's a very good analogy.
However it doesn't hold - as your neighbor I can't bar you from putting up a fence because it'll intrude on my view of your property... granted people try to do that _all the time_ but I think it's commonly understood that putting up a fence for privacy is allowed.
It's also not a great analogy for this case because another party is given continued easy access to view my backyard while the first party is denied - and the analogy breaks down here because, as a neighbor, I have no inherent right to view your private life at least as much as any of your other neighbors.
It's trivial to fix that - the exterior of GP's house then. That's available for public viewing; is intended for it, but is private property. If you monetise livestreaming it and describe it in your ToS, GP can't repaint the front door, or get new windows?
Or perhaps slightly less contrived:
If I publish a monthly lowlights reel of my favourite sports team as a podcast discussion on where they can improve in all their lost games, and then they suddenly go on a winning streak for >1 month so my USP is gone and I have nothing to talk about...?
Those examples don't fit because they are contracts not made in good faith. They aren't things you can control.
In this case, it was ruled that the public data is available. It was a good-faith contract on the part of HiQ to assume they could collect public data from a public website.
It would not be a good faith contract to assume you could control the paint colors on a property you don't own.
It seems to me that the interference ruling was wholly independent of deciding that what hiQ was doing is legal.
Does that mean that if a grocery store offers free samples, I can go in every day and take all the samples, and the grocery store is not allowed to selectively prevent me access?
It means that if they're offering free samples and refuse to offer you the same service they're offering to other customers they might be in hot water - which is consistent with what a lot of folks consider ethical. Offering an item for free to some folks and not to others is a form of discrimination - it's usually not a particularly troubling form of discrimination but in this case Google is allowed to walk up and take all the samples and the grocery store manager just smiles and nods - but when you (hiQ in this example) try and get one you're hit with an injunction and barred from entry.
I mean, anyone can be sued for anything. I can file a lawsuit with basically zero legitimacy to it. It'll probably get thrown out, but you were still sued.
If the question is could someone win, potentially. The argument would basically have to be that the removal of that open source project is akin to other cases of negligent interference.
If this is a specific concern, consult a lawyer - 'cause I'm not one.
Exactly, your backyard is of course yours. But you are not at liberty to use it to damage others. There are lots of rules about this. For example, opening a brothel on your own land is definitely not legal without considering how it affects the neighborhood.
Are you doing it just to spite scrapers, i.e. with "malicious intent"? If you have some other reason, you won't be guilty of intentional tortious interference.
They want search engines to index their profiles and provide organic search results links to their site, but then those same sites will require you to sign in when clicking a link to another public profile. You can search for that 2nd profile in Google and then view it without signing in, but not by clicking internal links. I've experienced this with Quora, LinkedIn, Instagram, FB and others. They want to have their cake and eat it too.
As a user of LinkedIn, I can pick which portions of my profile information I would like to be publicly available. This is not by default, so most people do not have it public. You can try seeing my profile without logging in. :-)
Your second point is interesting. I suspect the contract between hiQ and some company is that hiQ provides info on public profiles, and if LinkedIn removes all public profiles by requiring a login the contract would become moot. Just the same if I was to change my profile settings from public to private, hiQ wouldn't be in breach of their contract (nor would I).
Scraping should either be legal or not. The fact that you have a contract to sell the content you assumed it was legal to scrape should not matter. Too bad if you lose money.
They were pretty much legally void even before this precedent was established. They are only valid when they don't violate any existing U.S. law. Any authority assumed beyond that is completely false.
I wonder if it has anything to do with the fact that the data is actually owned by LinkedIn users, and they expressed that they want their data to be publicly available?
Unlikely. Under the license granted to LinkedIn, the user retains ownership, but the user's retention of information ownership doesn't compel LinkedIn to affirmatively do things with that data (i.e. LinkedIn isn't forced to vend the data to a given consumer just because the user says so).
The license further goes on to clarify that LinkedIn will vend public data to search engines, but the definition of "search engine" is almost certainly assumed (by LinkedIn, at least) to be up to them.
There's a fun confusing fact about that series, which is that Steve Jackson was one of the creators and frequent authors. "Oh, Steve Jackson, the creator of GURPS and Munchkin!" you're saying. Nope, different Steve Jackson. But wait! That Steve Jackson did come along later and author a few books in the series. So now a bunch of those books have "Steve Jackson" listed as the author, and there's no way of knowing which one it is without googling.
There is also Fabled Lands (https://en.wikipedia.org/wiki/Fabled_Lands) which was pretty cool because you could make choices that could take you to another entire book if you owned it.
There were quite a few RPGs-in-book-form. I had a Middle-Earth one that involved delving solo into the Mines of Moria. But Fighting Fantasy seems to be the best-known, and may have been the first.
I'm not sure the algorithm would emit any answer in the case where everyone has an equal level of objective foundation for their subjective belief.
But it would probably help in cases where popular opinion is entirely misinformed about the subjective question, not having any basis other than (already misinformed) hearsay on which to form their own subjective opinion.
So, for example, if there was a musician who had an absolutely terrible song that somehow became the song they were best known for (being a "one-hit wonder" whose song wasn't really a "hit"), the public might believe that that song is their best song, since it's the only song of theirs the public has ever heard of. Experts (i.e. people who have heard more than the one song of theirs), on the other hand, would tend to agree that it's certainly not their best song.
(Given that example, I'm inclined to suggest that you could use this algorithm to determine when people are being judged overly-harshly for things, e.g. whether to ban someone from a website just because they've received a lot of reports about that person's behavior.)
The example uses it to spot a case where most people are wrong, but some large minority of people expect that most people will answer incorrectly, while themselves answering correctly. A large enough difference (10%, in the example case) between the "what do you guess others will answer?" and what people actually answered indicates the majority opinion is, in fact, wrong.
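In code, that "surprisingly popular" rule is just a comparison of actual versus predicted shares. A minimal sketch for a yes/no question, with made-up numbers chosen to match the 10% gap described above:

```python
# Minimal sketch of the "surprisingly popular" answer rule for a binary question.
# Each respondent gives (own_answer, predicted fraction of others who will say yes).

def surprisingly_popular(responses):
    """responses: list of (answer: bool, predicted_yes_fraction: float)."""
    actual_yes = sum(1 for ans, _ in responses if ans) / len(responses)
    predicted_yes = sum(pred for _, pred in responses) / len(responses)

    # The answer whose actual support exceeds its predicted support wins,
    # even if it isn't the majority answer.
    surprise = actual_yes - predicted_yes     # > 0 means "yes" is surprisingly popular
    return ("yes" if surprise > 0 else "no"), surprise

# Illustrative numbers: 65% answer "no", everyone predicts only 25% will say "yes",
# but 35% actually do -- so "yes" is surprisingly popular despite being the minority.
responses = [(True, 0.25)] * 35 + [(False, 0.25)] * 65
print(surprisingly_popular(responses))   # ('yes', ~0.10)
```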
30% of the population holding extreme views isn't the same as any particular view being held by 30% and considered "extreme". They don't go into exactly what this figure means, but at the least we can imagine 15% of Americans holding one extreme and 15% holding the opposite extreme. Or even more credibly, it could mean that 30% of people hold at least one "extreme" view, but each individual "extreme" view is only held by 1-2%.
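A quick back-of-envelope check on that last reading (the 20 views and the 1.8% figure are invented, and independence between views is assumed) shows how many rare "extreme" views can still cover roughly 30% of people:

```python
# Back-of-envelope: many rare "extreme" views can still reach ~30% of the population.
p_single = 0.018     # assumed share holding any one particular extreme view
n_views = 20         # assumed number of distinct extreme views

p_at_least_one = 1 - (1 - p_single) ** n_views
print(f"{p_at_least_one:.0%}")   # ~30%
```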
An article about a "growing body of evidence" that fails to link to a single study. Great.
"Smartphone use takes about the same cognitive toll as losing a full night's sleep"
Would love to see this study, since it sounds completely implausible (but really important if true). Without looking at the research, the only reasonable course of action is to assume it's false.
"Smartphone use takes about the same cognitive toll as losing a full night's sleep"
What does this even mean? Does it mean your cognitive performance after using a smartphone daily is reduced to the level it would be at if you hadn't slept at all the previous night? Does it mean cumulative smartphone use is as harmful as cumulatively skipping a full night's sleep? (Which is clearly false, but to me sounds like the most natural interpretation of the author's statement.) Is it about your cognitive performance right after getting off your phone, or does it still apply if you last used your phone several hours ago?
But given the lack of citation, I will follow you in assuming that it's false.
That quote frustrated me too. "Smartphone use" is such a vague term here.
They seem to reference this again later in the article, but it doesn't seem to clear things up much.
"All that distraction adds up to a loss of raw brain power. Workers at a British company who multitasked on electronic media – a decent proxy for frequent smartphone use – were found in a 2014 study to lose about the same quantity of IQ as people who had smoked cannabis or lost a night's sleep."
That's some serious spin. What the fuck is "multitasked on electronic media"? Were they employed at a firm where they switched from data entry to looking up stuff? I worked at a firm where people did that, and yeah, it would look like their brain power was reduced, because it was a terrible shit job (debt collection). They'd have to go from data entry, to calls, to skip tracing lookups.
Depending on the type of work, that's not at all "a decent proxy." That's the kind of bullshit you read in meta-analysis papers. (Tip: if the introduction says it's a meta-analysis, chuck that paper in the bin... and set the bin on fire. Most meta-analysis papers are just lazy. You cannot control variables across vastly different experiments.)
I agree a lot of this is FUD. Media has always been used to manipulate people. Emotional manipulation grew massively during the Edward Bernays era (the father of smoking advertisements and creating political and/or emotional draw to products). It may have changed form from Print to Radio to TV to phones, but it's still just more of the same manipulation.
The site linked to is one of the worst places on the internet for actual news/facts. I won't take this seriously until someone provides links to a reputable site.
David Rock cites a very very similar fact in "Your Brain at Work."
I only have the audiobook currently or else I'd quote directly, but the gist is that distractions from overcommunication temporarily drop IQs an average of 10 points: 5 for women, 15 for men, supposedly. (The study originally revolved around email, and presumably text/SMS/chat in the 13 years since the study.)
Interestingly, following up on the quote he gave led me to this blog with an exchange [1] between the blog author and the original study author.
I'm pretty sure it's referred to later in the article:
>All that distraction adds up to a loss of raw brain power. Workers at a British company who multitasked on electronic media – a decent proxy for frequent smartphone use – were found in a 2014 study to lose about the same quantity of IQ as people who had smoked cannabis or lost a night's sleep.
Still not a reference, but you could probably find the study given that information if you wanted to. Or I'm sure the author would respond if you tweeted and asked him or something. If you really want to find out I'm sure it's possible.
What matters is that it resonates with a common opinion that smartphones are bad, so as long as it supports that somewhat popular position it doesn't need to be checked.
We all have biases, and in fact I don't use a smartphone because of some of the issues I noticed using one previously, but if truth matters to you, you cannot accept stuff just because you agree with it.
It means that if you are using your phone while driving, you are at just as much increased risk of incurring an accident as if you had not slept the previous night. Right, folks?
I'm surprised that this works. I always assumed Siri etc. would do some bare-minimum pre-processing of the audio input, if only to reduce noise, of which the simplest kind would be cutting frequencies that cannot be produced or registered by humans. Any insight into why this isn't already the case?
From skimming the paper it seems they're making the signal demodulate itself directly on the microphone - by the time it hits a low-pass filter and ADC, the audible frequencies are already injected.
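A toy model of that mechanism (a sketch with assumed numbers; the quadratic term stands in for the microphone's nonlinearity, and none of it is taken from the paper): amplitude-modulate a 1 kHz "command" onto a 25 kHz carrier, square it, and the low-pass filter before the ADC sees the 1 kHz tone again even though nothing audible was ever transmitted.

```python
# Toy model of inaudible-command demodulation via microphone nonlinearity.
# Frequencies and the quadratic term are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 192_000                       # sample rate high enough to represent 25 kHz
t = np.arange(0, 0.05, 1 / fs)

baseband = np.sin(2 * np.pi * 1_000 * t)        # the "voice command" (1 kHz tone)
carrier = np.cos(2 * np.pi * 25_000 * t)        # ultrasonic carrier
transmitted = (1 + 0.8 * baseband) * carrier    # AM signal: no energy below ~24 kHz

# Microphone nonlinearity: a small quadratic term mixes frequencies,
# producing sum/difference components -- including the original baseband.
mic_out = transmitted + 0.1 * transmitted ** 2

# The low-pass filter ahead of the ADC then keeps only the audible part.
b, a = butter(4, 8_000, fs=fs, btype="low")
recovered = filtfilt(b, a, mic_out)
recovered -= recovered.mean()      # drop the DC offset the squaring also produces

# The recovered signal is dominated by a 1 kHz component: the command reappears
# in-band even though the transmitted signal was purely ultrasonic.
spectrum = np.abs(np.fft.rfft(recovered))
freqs = np.fft.rfftfreq(len(recovered), 1 / fs)
print(f"strongest in-band frequency: {freqs[spectrum.argmax()]:.0f} Hz")
```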