According to Verlinde, there is no dark matter. Tell me how he's wrong, point by point. Just because gravity is emergent doesn't mean it can't have effects similar to what has perhaps been wrongly attributed to dark matter. You can't just say "Bullet Cluster" and "Dragonfly 44" and piss all over it without explanation.
I'm in no way qualified to give any technical judgement on Verlinde's paper, but I'm aware that modified theories of gravity have been around for a long time without much success. Of course I'd like it if some elegant theory could be found that fit the evidence better than dark matter, but going by past attempts I'm not anticipating this one will.
1. The tester and another team member spent a year developing something that would intercept calls and relay them. Two problems with that: (1) two person-years spent, and (2) it sounds like serious NIH (not invented here) syndrome. The problem that should have been solved was getting everyone to spend the time to write better tests and change code as needed. Instead, they spent a year on a workaround, invented in-house. Was there not anything else out there that did this?
2. The word is "focused", not "focussed".
3. Lack of detail: how exactly does it work beyond that basic diagram?
4. Where's the code for the project? Would it be useful to others?
However, I admire that the OP posted their experience, and it is useful information.
I believe "focussed" is acceptable, although "focused" is preferred in the US.[1][2]
Can you explain why this sounds like serious NIH syndrome? It looks like they built a system to cache service requests on top of an existing test framework. It seems specific enough that there might not be an existing method that fit well enough. The article is a bit light on details though, so I suppose it's hard to tell.
OP here ;-)
3 and 4: The article isn't at all about the tool we built, it's supposed to shine a light on the different kinds of tasks that Test Engineers do. Think of it as not very subtle job advertising.
1. The idea was definitely out there (some other commenter posted a link to pacts, which were a strong influence). Part of the process (and I hope this comes out in the article) was to try to find good ways to write better tests. We couldn't really (long story, let's just say "legacy code" to summarize it). So in the end we went for this technique. The whole process took a year to do, and as far as NIH goes, the existing implementations do not work in Google's setting. So we had to roll our own implementation. That did not take very long, though.
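For anyone curious what a record/replay layer like that roughly looks like, here is a minimal sketch of the general pact-style idea: hit the real service once, save the answer, and replay it on later runs. Everything here (RecordingStub, cache.json) is invented for illustration; it is not the tool described in the article.

    // Minimal record/replay stub for a service dependency. Invented names;
    // the real system is surely far more involved.
    import * as fs from "fs";

    type CacheEntry = { request: string; response: string };

    class RecordingStub {
      private cache: Map<string, string>;

      constructor(
        private cachePath: string,                      // e.g. "cache.json"
        private live: (req: string) => Promise<string>  // call to the real service
      ) {
        const entries: CacheEntry[] = fs.existsSync(cachePath)
          ? JSON.parse(fs.readFileSync(cachePath, "utf8"))
          : [];
        this.cache = new Map(entries.map(e => [e.request, e.response]));
      }

      // First run: forward to the real service and record the answer.
      // Later runs: replay the recorded answer, so tests stay fast and hermetic.
      async call(request: string): Promise<string> {
        const cached = this.cache.get(request);
        if (cached !== undefined) return cached;
        const response = await this.live(request);
        this.cache.set(request, response);
        fs.writeFileSync(
          this.cachePath,
          JSON.stringify(
            [...this.cache].map(([req, res]) => ({ request: req, response: res })),
            null,
            2
          )
        );
        return response;
      }
    }

In practice you would also need to decide when a recorded response is stale, which is where contract-style approaches like pacts come in.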
You're asking that every engineer at Google spend more time writing better tests? What does the math end up saying there? That if each engineer would otherwise have to spend more than X minutes per year writing better tests, the tool pays for itself, where X ends up being a comically low number.
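Back-of-envelope version, with made-up round numbers for the engineer count and working hours:

    // Rough break-even check only; the figures below are assumptions,
    // not anything from the article.
    const personYearsSpentOnTool = 2;        // "two person years spent"
    const minutesPerPersonYear = 2000 * 60;  // assume ~2,000 working hours/year
    const engineerCount = 30000;             // order-of-magnitude guess for Google

    const breakEvenMinutesPerEngineerPerYear =
      (personYearsSpentOnTool * minutesPerPersonYear) / engineerCount;

    console.log(breakEvenMinutesPerEngineerPerYear); // = 8 minutes per engineer per year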
And you'd still probably want a caching system at scale anyway, especially given a monorepo. Think of the compute hours saved.
NIHS, as you're calling this, can often make sense at scale.
Although I thought the same when I saw it, I think that's unfair. You could just as easily say Symantec/Norton/Microsoft Defender/Windows/Google is CIA/NSA. Since everything's being watched, Kaspersky might be just as much FSB as it is NSA, or any other country that could get its mitts on it. Cisco was definitely completely NSA there for a while, because of the backdoor. China's got Lenovo and every smart appliance, phone, and TV made in China, CIA/NSA have Dell, Korea and Japan have all of their smart TVs, phones, appliances, etc.
I'm being a little sarcastic, as obviously they aren't watching everyone all of the time, and most devices' backdoors, for those that have them, are unused. I'm fine with all of it, for the most part. Yeah, I'm not perfect, and I don't want all my info made public or sold, but we give up much, much more just by everything being online. Our banks, investments, to some extent our medical history, genealogy, likes/dislikes, actions, schedule... it's all there. Each of us could be simulated with all of the info they have at this point, but they can't fully do it yet. Now they just have to keep and mine all the data, which they do, but it's selective; it'll be a lot less selective about what is analyzed as time goes on. Then one or more AIs will decide what will happen to all that information, and us, if we don't all kill our planet or each other before then.
Best thing to do? Use the hell out of Kaspersky OS. Use Red Flag Linux. Just take all of your banking info and give it to the Nigerian who's been asking for it. If we all just gave up on security, what would humanity do with all of that trust? OK, maybe not such a good idea... maybe paranoia can help you be a little more secure, for now. However, if it's open source and you build it yourself, then at least you could look at it, if you wanted to and had time.
Actually, MS was the first to challenge National Security Letters. Moreover, if it were found out that they had left a backdoor in, it would result in the loss of billions of dollars in business from Europe alone. Kaspersky, by contrast, very clearly will always have a Russian government/corporate client.
Are you sure Microsoft was? I thought Yahoo did. I can't find the reference now (Google is full of references to Yahoo being the first to disclose an NSL), but I seem to remember something from 2008-2009 era and Microsoft's challenge was in 2012-2013 era if memory serves. Not that this matters much in the grand scheme, but I'd still love to know which company really has that distinction. Links appreciated if you find one!
I suspect you're right. That said, Yahoo has lost all respect from me after they basically turned over everything without question once Mayer became CEO. In all honesty it's very hard to tell, because the challenges are secret and sealed for the most part.
Apple only kind of stood up to the feds: they happily handed over the terrorist's iCloud account and everything associated with it, and only refused to be compelled to write software, knowing full well that the FBI already easily had the capability to break into the iPhone. I would say that the government having some sort of restricted access to this data could be useful and important to society IF, and only if, there were better civil liberties protections in place. DHS has way too much power, and we heard from Edward Snowden that our data is often used irresponsibly.
No, I defend it because one is a liberal democracy that has civil rights and has promoted liberty and democracy throughout the world (despite many flaws). The other is Russia.
> I downvoted you because you are repeating the tired old trope about implicit biases and agendas.
Hardly tired: biases and agendas caused the deep political divide in the U.S. and Brexit in the UK. While it's not news to those in the know, it's certainly not something we should hide or stop talking about.
This post gets it close, but what we need is the best tools that can run in a web browser on a public library computer. Many don't even have a Chromebook or RPi.
And really, what we need are the best tools that can run on the lowest-end mobile phone. That way a lot more people could code: https://www.cta.tech/News/Blog/Articles/2015/July/How-Mobile... You could just give away free Android phones; that'd be easier than trying to get everyone's clamshell phone to allow coding, and typing on a numeric keypad would be difficult. Then again, even with Android, how will those people type? Hmm, maybe we just need to give them all computers, or Facebook could set up computing centers. Actually, those lucky enough to get a ride into town and some money might afford 15 minutes in an internet cafe, so if you wrote an app that could function as a coding tutorial and a universal web email client at the same time, maybe someone might use it. But definitely target a low-end computer, so we're back again to realistically serving some additional people if we just have a web-based IDE that runs on a low-end computer. Should it support IE6 or IE7? Also, internet in many places is dog slow.
Still, beyond that, what is desperately needed is food, clean water, shelter, clothes, blankets, feminine hygiene products, medical assistance, infrastructure, the ability to grow food, doctors who live nearby with a constant supply of resources, and electricity. There are many parts of the world that don't have these things. Don't give them computers first. A computer would be intercepted before it gets to them, or taken away from them, or sold for necessary items. Many can't code, as they're struggling to survive.
> This post gets it close, but what we need is the best tools that can run in a web browser on a public library computer. Many don't even have a Chromebook or RPi.
This would further the centralization of the internet, which is a disaster waiting to happen. It's easier, it's cheaper, but it's also wrong.
We must teach people to take control of their computing power. This means giving them a personal computer they can control. They need enough computing power (low-end phones and the R-pi are more than enough) and a decent I/O setup (screen and keyboard, mostly).
You also want to avoid proprietary software. I won't object too loudly to adults and companies using proprietary products, but when it comes to children, I have to go full RMS: proprietary software is wrong, and teaching it is evil. In some cases (MS-Word vs LibreOffice), we really have no excuse.
(A ubiquitous counter-argument is that not knowing MS-Word makes it harder to find a job. This ultimately does not matter, because "proper" training will just cause other people to be unemployed instead. To reduce unemployment, we have to either create more jobs or share the ones we have (4-day work weeks come to mind). Of course, since computing is mainly about destroying menial work, don't get your hopes up about job creation.)
Sadly most people don't want to, as they are mentally exhausted after pulling triple shifts and whatnot to keep food on the table. They just want to veg out, with the web being the new cable TV.
Of course. Basic needs first. Let's start with a 4-day work week, so people have time to breathe and think. (You can keep the same net salary, because if it's done across the whole country, this mechanically reduces unemployment and the related insurance costs.)
>> This post gets it close, but what we need is the best tools that can run in a web browser on a public library computer. Many don't even have a Chromebook or RPi.
> This would further the centralization of the internet, which is a disaster waiting to happen. It's easier, it's cheaper, but it's also wrong.
Sorry, honest question - I don't understand what you're saying here. Could you please explain?
Okay. The whole story is long and complicated, so I'm going to over-simplify things.
At its core, the internet has no centre. It's just a network of computers, peers in the network, who can talk to any other computer in the network without restriction (besides bandwidth). There are complications, such as the distinction between an internet user and a full-fledged operator, but as long as the operators give us a set of basic guarantees (no filtering, no snooping…) we mostly don't care.
Such a structure has far-reaching consequences. The most obvious is that freedom of expression just got real. Anybody can publish most anything on the internet. This was not true before: journals, TV, radio, all were controlled by a relatively small number of people. Just count how many people have ever published anything on paper, including research papers and letters to the editor. Compare that to the number of people who have posted articles or comments on the web.
Basically, an uncentered internet fundamentally changes the way we communicate. This necessarily changes the power structures. Historically, whoever held power before such a change tended to resist, leading to lots of spilled blood. Just think how the printing press enabled the Enlightenment, religious schisms, and revolution. Mostly for the better, but quite bloody still.
The first centre of the internet is the DNS system. To contact a computer, you need its IP address. It's like a phone number, only worse: it's longer, and in practice you're going to talk to a lot of computers. Hence DNS, to have memorable names. Thankfully, the DNS system is distributed into domains and sub-domains, so it's not so bad.
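For concreteness, this is all DNS buys you; in Node.js it's a couple of lines (example.com is just a placeholder name):

    // Turn a memorable name into the IP address you actually need
    // in order to contact the machine.
    import { promises as dns } from "dns";

    dns.resolve4("example.com")
      .then(addresses => console.log(addresses)) // prints the resolved IPv4 address(es)
      .catch(console.error);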
There are however other problems. See, to be an actual peer in the network, you need to be both a client and a server. Publishing something on the internet mostly means giving it out upon request, the way web servers do. You need a public IP address, and you need to be up all the time (modulo unavoidable downtime). This was totally the case for university servers, but not for home computers.
When the internet became popular in the '90s, home computers used to connect to the internet by making a phone call to the ISP. An actual phone call, which held the line and cost money (most phone companies charged by the minute and mileage). You couldn't be connected 24/7; it cost way too much. So, no web server for you, and no mail server either. You had to borrow someone else's.
To remedy this problem, internet providers also provided an e-mail address and a way of publishing stuff on the web. That was the first serious step towards centralization.
Later, ADSL came along. The A stands for "Asymmetric". This one doesn't monopolize your phone line, because it uses higher frequencies. You can also stay connected all the time for a reasonable price. Alas, you still couldn't have your own server, because the upload rates were just not built for that.
They could have made it symmetric. They chose not to, I think for commercial reasons: they assumed (mostly rightly) that people are mostly consumers. They don't upload much, they mostly download. So let's sacrifice the upload for a better download. Makes total sense, right?
Not quite. One big application was significantly hurt by this choice: peer-to-peer file sharing, which needs both upload and download to work smoothly. Also, while people didn't in fact upload much, many did have an email address and a personal blog. These could easily have been hosted at home, if the providers had their routers host them instead of centralised servers.
In any case, it all went downhill from there. Gmail was such a big hit that now Google actively reads a scarily high proportion of all email worldwide. (They use algorithms instead of humans, which is worse, because it scales.) You distribute your videos with a handful of services such as YouTube, you talk to your friends through Facebook… And of course the ubiquitous search box, but that one has an excuse: we don't quite know how to decentralise that one yet.
This centralization is not neutral: there are terms of service you need to abide by, DMCA takedowns, policies. This is a rather pervasive and direct attack on free speech. Privacy is hurt big time too: it's not exactly easy to secure a private chat connection these days. At the very least, you tend to make a request to some central server just to connect with your friend, so that even if the server doesn't log or record your conversation, it knows you attempted to connect.
This doesn't have to be the case. The truth is, if everyone had a private server (no bigger than an R-pi) at home, and reasonably broad symmetric bandwidth, there would be no need for Gmail, YouTube, Facebook, or Medium. There's also a good chance there would be little need for centralised forums such as right here (moderation might be a problem, though). This would mean much harder mass surveillance, which we now know is not exactly a conspiracy theory, thanks to Snowden.
---
Programs decide what we can do, and how we can do it, all the time. Programmers write those programs, thereby influencing our lives. You really want control over the programs that affect you. This means Free Software, but it also means understanding the damn thing and executing it on your own hardware whenever possible, not on some remote server. So, when teaching children the basics of computing, we must also demonstrate some good habits. The centralization of a web app is not a good habit.
I agree with the sentiment. Do note that people said exactly the same about mobile phones in Africa: they shouldn't be a priority. The article you link wouldn't have been written had we said "no, first give them water and blankets". Maybe the benefits turn out to be surprising even in deprived areas?
As for web-based vs native, I can see it going either way. If connectivity is too unreliable and/or expensive then native will win. Many people in the developing world spend a lot of time with no credit on their phones.
The biggest problem I see with teaching people computing these days, which is only partly addressed by cheaper computing devices and IDEs, is the sheer number of tools and abstractions to learn. What order do you teach in? What if there's no teacher? Version control, testing, frameworks etc. are all great things, and a certain proportion of people will need to learn them at some point. For others it's overwhelming, and for most it at least hinders the teaching process. Too much "ignore this for now" can be demotivating and confusing. Yet, it's also not enough to start with just a BASIC prompt any more. People have now seen all the cool modern things you can do and want to do it themselves, so moving an X across the screen is not motivating either. I have found completely dedicated environments like https://scratch.mit.edu to be great for younger kids, but not sure how the rest of the learning curve should be shaped.
I think the gap between 1) and 2) for adults is gigantic, because of the issues I mentioned. I don't think you've really addressed the complexity of all the stuff surrounding programming.
I don't doubt there are long lists of links to starting programming, as there are for every conceivable subject (also "go to school" doesn't work for the disadvantaged/developing countries).
The point is keeping someone's enthusiasm alive and making good use of their time. In the "golden age" you would see, say, Space Invaders, and it would be within your ability (even as a teenager) to reproduce it, starting from the BASIC prompt and short manual that came with your computer. If you want someone to work efficiently now, on the things they want to work on (fun/visual/relevant to their life), they need to learn a lot more to get started. That's an improvement over the past in many ways, because it was simply not possible to work as efficiently then (e.g. no high-quality libraries, only basic line editing). In a way, people want more and can do more. But planning how to get there is harder.
>"Still beyond that what is desperately needed is food, clean water, shelter, clothes, blankets, feminine hygiene products, medical assistance, infrastructure, the ability to grow food, doctors that live nearby with a constant supply of resources, and electricity." //
I agree with the sentiment of that so much, but it's not an either-or thing in some situations.
Also, computers, mainly in the form of feature phones AFAIK, have helped some of the poorest people by enabling the communication and information acquisition that can help with water, food, shelter, medical aid, etc.
If you RTFA, though, you'll see it only addresses the disadvantaged in developed countries, where the above are not widespread issues.
Having lived through the era, admittedly in the USA not the UK, the article was clearly not written by a Gen Xer, because "effectively cut children from lower-income households off from the computer revolution" is complete nonsense. If anything, the tradition of building computers to operate for a decade was still in play, while rapid technological advancement meant kids could have somewhat-used computers pretty much free for the asking. This continued well into the 90s, where it seems every Linux install story began with "first obtain a free discarded computer" to install on. Nowadays you have to spend $75 on a new Pi and its required accessories, but in the "old days" you learned for free on a used machine.
I suppose no matter how bad it is for the environment, filling the world's landfills with e-waste is at least good for the economy.
I posted more detail in another comment, but it should be possible to write a Node.js server app inside the package that would serve up the Electron app's HTML as a web app (i.e. with command flags like [--server --port $port]). You'd probably have to implement the keystroke handler and a renderer for the menus, but otherwise it _should_ work.
Of course, I have no idea how much work would really be involved (although I imagine quite a bit), and you'd end up with something that resembles Cloud 9 or Eclipse Che, albeit with VS Code's extension library.
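To make the idea concrete, here is a very rough sketch of what such a server mode might look like. The --server/--port flags and the app/index.html path are hypothetical, and this is nothing like how VS Code actually ships; a real editor would also need websockets back to the language services, keyboard handling, menus, and so on.

    // Hypothetical "server mode" for an Electron-packaged app.
    import * as http from "http";
    import * as fs from "fs";
    import * as path from "path";

    const args = process.argv.slice(2);
    const serverMode = args.includes("--server");
    const portIndex = args.indexOf("--port");
    const port = portIndex !== -1 ? Number(args[portIndex + 1]) : 8080;

    if (serverMode) {
      // Serve the same HTML/JS bundle the Electron shell would normally load,
      // so a plain browser can render the UI instead of the desktop window.
      http.createServer((req, res) => {
        const file = req.url && req.url !== "/" ? req.url : "/index.html";
        fs.readFile(path.join(__dirname, "app", file), (err, data) => {
          if (err) {
            res.writeHead(404);
            res.end("not found");
            return;
          }
          res.writeHead(200);
          res.end(data);
        });
      }).listen(port, () => console.log(`UI served on http://localhost:${port}`));
    } else {
      // Normal path: launch the Electron window as usual (omitted here).
    }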
It just feels like something that would be better built directly into Electron though, so it could work for e.g. Atom as well.
Some computers are too old and slow to run the JS in the client as the IDE. Either the browser is old and incompatible, or it would just waste precious internet cafe/library computer time for each action.
Either the JS needs to be fast and compatible with much older browsers, or you just do a webform with old HTML tables as structure and do everything server-side.
Then you have the opposite problem in that those with newer computers wouldn't want to use it. So you might have to provide multiple levels of support, similar to how Gmail and OWA have a light client and a heavy client.
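As a toy illustration of that split (the ?ui=basic switch and the page contents are invented, not anything Gmail or OWA actually do):

    import * as http from "http";

    // Old browsers get plain forms and tables, with everything done server-side;
    // newer browsers get the full JS client (stubbed out here).
    const basicPage = `<html><body>
      <table border="1"><tr><td>
        <form method="post" action="/run">
          <textarea name="code" rows="10" cols="60"></textarea><br>
          <input type="submit" value="Run on the server">
        </form>
      </td></tr></table>
    </body></html>`;

    http.createServer((req, res) => {
      const wantsBasic = (req.url ?? "").includes("ui=basic");
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(
        wantsBasic
          ? basicPage
          : "<html><body><script src='/app.js'></script></body></html>"
      );
    }).listen(8080);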
By far the best, easiest on-ramp for these users is a good web-based development environment. It will run on a public computer, or an unmodified Chromebook (preserving those security properties), and they are useful for everything from first-steps learning to Real Programming.
Of course I'm biased, because I make one (https://anvil.works) - but there are loads of great platforms out there. You can learn and practice basic Python with Trinket (https://trinket.io), or command the full complexity of a Linux workstation system with Cloud 9 (https://c9.io). (With Anvil, we aim somewhere in the middle - useful, deployable, earn-or-save-money-with-them apps, without the full horror of the Web. But my point is that there are many good choices available.)
Or maybe we need to get Chromebooks/RPis into public libraries instead? I admit RPis are probably a bit of a stretch, given the potential for malicious usage, but Chromebooks seem an ideal fit, for much the same reasons they're good for the education market.
(Note: fredley responded when my comment was just the first line about public libraries.)
I think this gets too expensive. Instead, I think you need to just serve the lowest common denominator and help those that don't even have that with other resources.
> Sometime late in the 1960s, in the countryside of Vermont, my sister and I saw in the evening sky three round lights, apparently far-off, perfectly still and unchanging, each the size of a thumbnail held up before the eye. We hadn’t seen them appear—they were just there. They remained for a few moments, and then with instantaneous acceleration vanished over the horizon: in the blink, that is, of an eye.
Which was a jet that had been coming right at them and then changed course 90 degrees. Probably the source of 99% of sightings, if not more. Anything with three lights (white, red, green), or similar lights that appear to be "rotating" (because they blink, which causes the mind to come up with an explanation for the blinking), is likely a human aircraft.
A sufficiently advanced alien civilization tends to fly without FAA-approved lighting.
I lost my high school friends after high school and my college friends after college; I made new friends from 30 to 40, but past 40 I've been losing some of them.
I have had some of the same. I think some of my current new friendships (I'm in my late 30s) exist because I moved out of the US to northern Europe. I found things in common with other immigrants in language classes.
If the OP chooses to do this, they should fully commit to it, as if there were no other way, if they want to succeed.
But, if the OP thinks they are crazy (and they definitely have doubts if they posted this here), they should indeed immediately take responsibility to hire a replacement and start interviewing elsewhere, hopefully in such a way that it will limit CV damage.