> The amount we can extract is tiny compared to the volume of energy put into the air every day by the sun.
Here's a study of how much wind power you can extract before adding more wind turbines doesn't produce more power overall: https://www.pnas.org/doi/10.1073/pnas.1208993109
At the 100m mark (as opposed to the whole atmosphere up to the jet streams), they calculate 250TW.
Total human electricity generation is well under 5TW averaged over the year (30PWh/yr, out of around 180PWh/yr of total energy), so we could supply all electricity from wind and still leave about 98% of the "extractable" global wind potential in the air; that extractable figure is itself less than the total wind energy because of the Betz limit.
Then again, in terms of direct threats to birds (ignoring climate change), consider the other causes of bird deaths per year in the US: powerlines (25 million), and more significantly air pollution (200-2000 million), windows (2000 million) and cats (1000-4000 million). Even very high estimates for wind turbines of 2 million a year, including things like habitat destruction and extra powerlines (more direct estimates from collisions alone are around the 500k mark), make turbines seem like the lesser of the problems.
Which is not to say it's not an important factor, especially as turbines affect very specific kinds of birds disproportionately, but it's not as if wind turbines are primarily bird-killing devices, and it's even possible they're a net benefit to birdkind by, say, reducing air pollution.
It's quite obvious if you look at how a watermill is built. Upstream, there's a weir which diverts water along a culvert into a pond at the level of the weir. The watermill is then driven by water falling from the pond to the level of the river at that point. Depending on the gradient of the river, this can be some distance from the weir. If the weir diverts a substantial portion of the flow on average, anyone wanting to use the river between the weir and the mill will find there's much less water for the purpose. Many British rivers in areas where there were (literal) cottage industries like the little mesters are not that big (only a few metres across and under a metre deep), so you can see where conflicts could arise. They also dry up to mere trickles in dry weather.
How did you manage to get it to not bog down in speed? Every tablet I've ever had has become barely functional by year 3, and that includes the iPad Mini.
But yes, even the first gen iPad Minis had really nice screens and form-factors.
Meanwhile my Kobo Clara is just trucking along with no obvious change in battery life or responsiveness (despite the standard e-reader application processor with its power-sipping weediness).
Minutes to start and get the book open, and stuttering when doing things like choosing a book and turning pages, was eventually my experience on all the devices: iPad Mini, Nexus 7, and a Samsung Tab 10.1. To be fair, someone else I know had an iPad Air for years and it was completely fine, so maybe they're the only tablets that actually work (or the Mini was a lemon and Androids are just bad?). Or maybe tablets are better since then: it's been a very long time since those devices went into a drawer, never to be used again.
The problem is that cutting IT and similar functions to the bone is really good for CEOs. It juices the profits in the short/mid term, the stock price goes up because investors just see line go up, money goes in, and the CEO gets plaudits. There's only one figure of merit: stock price. What you measure is what you get.
It's only much later that the wheels fall off and it all goes to hell. The hack isn't a result of the CEO's actions this quarter; it's years and years of cumulative stock-price optimisation for which the CEO was rewarded.
And you can't even blame all the investors because many will be diluted and mixed through funds and pensions. Is Muriel to blame because her private pension, which everyone told her is good and responsible financial planning, invested in Co-Operative Group on the back of strong growth and "business optimisation initiatives"? Is she supposed to call up Legal and General and say "look I know 2% of my pension is invested in Co-Op Group Ltd and it's doing well, and yes I'm with you guys because you have good returns, but I'm concerned their supermarket division is outsourcing their IT too much, could you please reduce my returns for the next few years and invest in companies that make less money by doing the IT more correctly?"
And you need a process to follow. You can't just have nearly 4000 supermarkets ringing up HQ at random and reading out lists of 1000 items each. Then what? Back when a supermarket chain did operate like that, the processes were things like "fill in form ABC in triplicate, forward two to department DEF for batching and then forward one to department GHI for supplier orders, and they produce forms XYZ to send to department JKL for turning into orders for dispatch from warehouses". And so on and so on. You can't just magic up that entire infrastructure and knowledge even if you could get the warm bodies to implement it. Everyone who remembers how to operate a system like that is retired or has forgotten the details, all the forms were destroyed years ago and even the buildings with the phones and vacuum tubes and mail rooms don't exist.
Of course you could stand up a whole new system like that eventually, but you could also use the time to fix the computers and get back to business probably sooner.
But I imagine during those 3 weeks, there were a lot of phone calls, ad-hoc processes being invented and general chaos to get some minimal level of service limping along.
I agree, although it seems like a failure of imagination that this is so difficult. The staff will have a good understanding of what usually happens and what needs to happen. What they are lacking is some really basic things that are the natural monopoly of "the system".
Perhaps we need fallback systems that can rebuild some of that utility from scratch...
* A communication channel of last resort that can be bootstrapped. Like an emergency RCS messaging number that everyone is given or even a print/mailing service.
* A way to authenticate people getting in touch using photo ID, archived employee data or some kind of web of trust.
* A way to send messages to everyone using the RCS system.
* A way to commission printing, delivery and collection of printed forms.
* A bot that can guide people to enter data into a particular schema.
* An append-only data store that records messages, with a filtering and export layer on top.
* A way to give people access to an office suite outside of the normal MS/Google subscription.
* A reliable third party wifi/cell service that is detached from your infrastructure.
* A pool of admin people who can run OCR and do data entry.
Basically you onboard people onto an emergency system. And have some basic resources that let people communicate and start spreadsheets.
Part of the problem with emergency systems is that whatever the emergency is, it will take you from zero to over capacity on whatever system you fall back to, particularly if you're requiring communication from suddenly over-burdened human staff working frantically, and those processes may break down because of that.
> Everyone who remembers how to operate a system like that is retired or has forgotten the details
Anyone who’s experienced the sudden emergence of middle management might feel otherwise :) please don’t teach those people the meaning of “triplicate,” they might try to apply it to next quarter’s Jira workflows…
The whole point of a shared_ptr is that you can't delete it out from under other users. You can do exactly what you suggest with a weak_ptr.
A shared_ptr to an object with a deinit() is rather like a weak_ptr in that someone else can "delete" it, and you should check if it's really there, except it can still have some information about things like deinit failures rather than just telling you that the object is deleted.
In this case it is clearly intended to be deleted out from under users, so why not use the language feature for that (the destructor) instead of inventing yet another level of book-keeping?
You can, as long as you don't need to know if/how something within the deinit failed. If you literally just want to know if the item is gone, that's exactly a weak_ptr.
File streams, say, do this by expecting users to close() the file themselves if they care if exceptions happen, but if you destruct without doing that first, any exceptions are caught and don't make it out of the destructor, so they're just gone and you'll never know.
With a bit (OK quite a lot) of fiddling, you could probably remove the CCD and feed the analog data into the controller, unless that's also got a crypto system in it.
Presumably if you were discovered you would then "burn" the device, as its local key would then be known to be in the hands of bad actors, but now you need to be checking all photos against a blacklist. Which also means if you buy a second hand device, you might be buying a device with "untrusted" output.
Any approach that requires cryptographic attestation or technical control of all endpoints is not a solution we should be pursuing. Think of it as a tainted primitive. Not to be implemented.
The problem of Trust is a human problem, and throwing technology at it just makes it worse.
I'm absolutely in agreement with that. The appetite for technical solutions to social problems seems utterly endless.
This particular idea has so many glaring problems that one might almost wonder if the motivation is less about "preventing misinformation" or "protecting democracy" or "thinking of the children" or whatever, and more about making it easier to prove you took the photo as you sue someone for using it without permission. But any technology promoted by Adobe couldn't be about DRM, so that's just crazy talk!
That fixes the problem of content being manipulated and then the original being discounted as fake when challenged.
It doesn't do a whole lot for something entirely fictional, unless it becomes so ubiquitous that anything unsigned is assumed to be fake rather than just made on a "normal" device. And even if you did manage to sign every photo, who's managing those keys? It's the difference between TLS telling you what you see is what the server sent and trusting the server to send the truth in the first place.
> Stack by default - Unique ptr if needed on the heap - Shared ptr if needed to share ownership
Sounds about right. Shared ownership is fairly rare though, and you often only need shared access (reference/pointer if nullable) and can provide other, more explicit, ways of managing the lifetime.
> unique ptr is zero cost after make_unique()
Kind of, but compared to the stack, it could cause caching inefficiency because your heap-allocated thing could be almost anywhere, but your stack-allocated thing is probably in the cache already.