
UUID v5 is quite useful if you want to deterministically convert external identifiers into UUIDs — define a namespace UUID for each potential identifier source (to keep them separate), then use that to derive a v5 UUID from the external identifier. It's very useful for idempotent data imports.
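
For illustration, a minimal Python sketch of that pattern; the namespace name and the external ID below are made up:

    import uuid

    # One namespace UUID per identifier source, defined once and kept stable.
    # Deriving it from NAMESPACE_DNS and a made-up name is just one option;
    # a hard-coded random UUID works equally well.
    CUSTOMER_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "imports.example.com/customers")

    def external_to_uuid(external_id: str) -> uuid.UUID:
        # Same namespace + same external ID => same UUIDv5, so re-running
        # an import produces the same keys (idempotent).
        return uuid.uuid5(CUSTOMER_NS, external_id)

    print(external_to_uuid("cust-42"))  # stable across runs and machines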


Both UUIDv3 and UUIDv5 are prohibited for some use cases in some countries (including the US), which is something to be aware of. Unfortunately, no one has created an updated standard UUID that uses a hash function that is not broken. So while useful, they are not always an option.


Could you provide an example of such a prohibition? I've never heard of that before.

I doubt that the quality of the hash function is the real issue. The problem with MD5 and SHA1 is that it's easy (for MD5) and technically possible (for SHA1) to generate collisions. That makes them broken for enforcing message integrity. But a UUID is not an integrity check. Both MD5 and SHA1 are still very good as non-cryptographic hash functions. While a hash-based UUID provides obfuscation, it isn't really a security mechanism.

Even the existence of UUIDv5 feels like a knee-jerk reaction from when MD5 was "bad" but SHA1 was still "good". No hash function will protect you against de-obfuscation of low-entropy inputs. I can feed your social security number through SHA3-512 but it's not going to make it any less guessable than if I fed it through MD5.
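
To make that concrete, here's a rough sketch (the "hashed SSN" scheme is hypothetical): the entire nine-digit space is trivially enumerable on one machine, no matter which hash function you pick.

    import hashlib

    def recover_ssn(target_digest: str):
        # Only 10**9 possible SSNs; a stronger hash adds zero entropy to the
        # input, so enumeration stays cheap no matter what you hash with.
        for n in range(10**9):
            candidate = f"{n:09d}"
            if hashlib.sha3_512(candidate.encode()).hexdigest() == target_digest:
                return candidate
        return None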

Moreover, a UUID only has 122 bits of usable space. Even if we defined a new SHA2- or SHA3-based UUID version, it's still going to have to truncate the hash output to less than half of its full size. This significantly alters the security properties of the hash function, though I'm not sure if much cryptanalysis has been done on the shorter forms to see if they're more practically breakable yet.

There is one area where the collision resistance of the hash function could be a concern, though. If all of the inputs to the hash are under the control of a potential attacker, then maliciously constructed data could produce the same UUID. I still wouldn't think this would be a major issue, since most databases will fail to insert a duplicate key, but it might allow for various denial of service attacks. This still feels like quite a niche risk, though, and very circumstance-dependent.


Systems where a sophisticated attacker may engineer collisions are precisely why UUIDv3/5 are prohibited. SHA1 is deemed broken by some government authorities and not to be used in any critical systems, including as UUID (this is where I’ve seen it expressly prohibited). The entire point of UUIDs in many systems is that collisions should be impossible, system integrity is predicated on it. Many systems exist in a presumptively adversarial environment.

Similarly, UUIDv4 is also prohibited in many contexts because the use of weak entropy sources has been a recurring problem in real systems. It isn't a theoretical issue; it has actually happened repeatedly. Decentralized generation of UUIDv4 is not trusted because humans struggle to implement it correctly, causing collisions where none are expected.

There are also contexts where probabilistic collision resistance is disallowed because collision probabilities, while low, are high enough to be theoretically plausible. Most people aren’t working on systems this large yet.

Ironically, there are many reasonable ways to construct secure 128-bit identity values, but the standards don't define one. Some flavor of deterministic generation plus encryption is not uncommon, but it is also non-standard.

That said, many companies unavoidably have a mix of standard and non-standard UUIDs internally. To mitigate collisions, they have to transform those UUIDs into something else UUID-like, at which point it is pretty much guaranteed to be non-standard. Not ideal but that is the world we live in.


Ok, that makes sense. As far as I can tell, even truncated to "just" 122 bits, there's still no known way to generate a SHA-256 collision, so the MD5/SHA1 versions are comparatively vulnerable vs a hypothetical SHA-256 UUID version. However, it's starting to feel like UUIDs may not be long enough in general to meet the need for secure, distributed ID generation.
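
For what it's worth, nothing stops you from rolling a non-standard SHA-256 construction today: hash, truncate to 128 bits, and overwrite the version/variant bits so the result still parses as a UUID. A rough sketch (not any standard; the choice of version nibble is arbitrary here):

    import hashlib
    import uuid

    def sha256_uuid(namespace: uuid.UUID, name: str) -> uuid.UUID:
        # Mirror the v3/v5 recipe but with SHA-256: hash namespace + name,
        # keep the first 16 bytes, then stamp version/variant bits.
        digest = hashlib.sha256(namespace.bytes + name.encode("utf-8")).digest()
        raw = bytearray(digest[:16])
        raw[6] = (raw[6] & 0x0F) | 0x80   # version nibble (non-standard choice)
        raw[8] = (raw[8] & 0x3F) | 0x80   # RFC 4122 variant bits
        return uuid.UUID(bytes=bytes(raw))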


Disclaimer: I am a CS professor.

I don't think AI advancements will cause a problem for the value of the degree (or rather, if they do, then it wasn't a very good MS degree). The value of formal university CS education done well, at both BS and MS levels, is learning skills in a context that integrates those skills into a knowledge framework that transcends any particular technology and hopefully outlasts several trend changes. The specific ML algorithms you would learn in an ML-focused MS will likely be out-of-date soon; the training on problem formulation, data preparation, fundamental limits of learning, and the theory of how ML works will not only outlast many technology shifts, but give you a good framework for navigating those shifts and integrating new advances into your knowledge.

There are likely many programs that would not provide this kind of foundation. But in thinking about the value of an MS in general, this is how I would advise a student to approach it. (And on MS vs. BS: a BS usually provides some opportunity for specialization but is very much a generalist degree; an MS provides more opportunity for specialization and credentialing on that specialization.)


*asks a drug dealer* How do you feel legalization will impact your business? /sarcasm

Disclaimer: I dropped out, but I do wish I had finished, just because it's sad to now be 36 and I hate leaving things undone.

In all seriousness, I think higher ed has issues to resolve regardless of whatever AI does to it. The ongoing imbalance between what a degree costs and the value one can extract from it has mostly been impacting students outside CS and other engineering degrees, but with a slower economy we may end up sucked into the issue other fields have long suffered from. Speak to anyone in the environmental field: it's hard to believe this is /the issue/ of our time, yet we value it so poorly.


>The value of formal university CS education done well, at both BS and MS levels, is learning skills in a context that integrates those skills into a knowledge framework that transcends any particular technology and hopefully outlasts several trend changes.

While I don't disagree with your main point re the value of a CS degree, this is the same argument verbatim given by every English, History, and Underwater basket weaving professor.


They’ve also got a point. The skills may not be technologically valuable, but they can teach critical thinking and give broader context for life. Philosophy majors tend to do better than average salary-wise as well.

That said, I also believe many fields have gone bonkers. The whole "everybody needs a degree" push also creates incentives for degree factories.


Outside of ML/AI what would you say are areas of CS in which a lot of active research is being conducted?


Programming language theory and formal verification have been relatively hot during the last 10-15 years and show no signs of slowdown. Still, a relatively niche area.

Also, the intersection of CS, probability, and statistics is a very interesting area to work in. Less trendy than deep ML, but really practical. See e.g. Stan, Pyro, Andrew Gelman's books, etc.
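
If anyone wants a taste of that intersection without installing Stan or Pyro, the core move (treat the unknown as a distribution and update it with data) fits in a few lines of plain Python; the numbers here are made up:

    # Toy Bayesian inference by grid approximation: posterior over a
    # conversion rate theta after seeing 12 successes in 100 trials,
    # starting from a uniform prior.
    k, n = 12, 100
    grid = [i / 1000 for i in range(1, 1000)]        # candidate theta values
    like = [t**k * (1 - t)**(n - k) for t in grid]   # binomial likelihood (unnormalized)
    total = sum(like)                                # uniform prior, so just normalize
    post = [l / total for l in like]

    mean = sum(t * p for t, p in zip(grid, post))
    print(f"posterior mean ~ {mean:.3f}")            # close to (k+1)/(n+2)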


Thanks for the insight. My Software Quality prof gifted me a copy of one of Gelman's texts but I haven't had time to take it in; I should change that...

It's weird to me that formal verification isn't more widely used; I would think it would be common at least in safety critical systems development.


There's a lot to critique in publishing and associated costs, but this tweet is unfortunately factually wrong.

From the linked article, ACM's publication costs are $10.9M, not $33.7M.

One of the ACM's major publication initiatives over the last 3-5 years has been an overhaul of their publication templates and publication workflow, to ensure greater consistency in publication formatting, improve accessibility, and archive publications in more future-proof formats. There are also the ongoing costs of creating and indexing metadata (ACM tracks more metadata than arXiv, including resolved citations) and of preservation (ACM buys failsafe perpetual access services from Portico; arXiv has mirrors at other university libraries).

Should it cost $10.9M? I am not sure. Does it cost a lot more than what arXiv does? Yes.

For a costing exercise: the service ACM buys from Portico is archival and republication. If ACM goes insolvent, Portico flips on their archive and the content remains available. How would you price this service, knowing that when it is actually needed, it's because your customer can no longer pay bills, and you now need to take up their hosting (and all related costs) for approximately forever with no further revenue? I think a network of university libraries would be a more cost-effective way to provide this service, but it's the kind of thing that people working on publication and archival professionally think about, and that factors into the cost of professional archival-level publication.

(I cannot speak to IEEE.)


> their publication templates and publication workflow, to ensure greater consistency in publication formatting, improve accessibility, and archive publications in more future-proof formats

Publication workflow, formatting and accessibility? For every paper I’ve done I just send the ACM a final PDF produced myself from a LaTeX template that hasn’t changed in years. What’s the workflow for taking an already final PDF from authors and uploading it to a file server?


That workflow has changed in the last few years.

- Brand new templates (introduced about 5 years ago, the LaTeX template has had multiple updates per year since then)

- A workflow that makes use of the source (or possibly codes that the source embeds in the PDF; either way, you have to provide LaTeX source to ACM these days)

- Papers now render in both PDF and HTML (and the HTML looks quite good); this started showing up within the last 1-2 years

- Papers are archived in an XML-based format (JATS, I believe; I do not know the details) to facilitate rendering to PDF, HTML, ePub, and other formats not yet devised


That doesn't seem too impressive. It's essentially a workflow that a few universities could band together and replicate via an open source project relatively easily IMHO.

As an example, Pandoc can already handle 90% of this type of workflow by itself (converting LaTeX to various XML formats). An open source project could be shared among a few universities, or developed by a single body like the ACM and used across dozens of publications and fields. Even two or three full-time people working on this would cost much less than $1M per year.
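
As a concrete (if simplified) starting point, assuming a paper.tex that pandoc's LaTeX reader can digest; heavily customized class files would need extra filters or a tool like LaTeXML:

    import subprocess

    # Convert one LaTeX source into JATS XML and HTML with pandoc.
    # "paper.tex" is a placeholder; a production pipeline would add
    # metadata injection, validation, and per-venue templates.
    for fmt, out in [("jats", "paper.xml"), ("html", "paper.html")]:
        subprocess.run(
            ["pandoc", "paper.tex", "--from", "latex", "--to", fmt,
             "--output", out],
            check=True,
        )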


That sounds pretty counterproductive. So now authors, in addition to keeping up on their research, need to keep up on the updates to the ACM's LaTeX stylesheet? And there's every chance that the version that is formatted well with the ACM stylesheet when you initially submit will have formatting bugs six months later because the template got updated? And now you have a whole new toolchain to debug when the HTML version of your paper misaligns your tables? And maybe the HTML version that looks fine today will get mangled in 2028 after you retire and they update the CSS, as has happened with most of the New York Times articles?

It sounds like the ACM has a really different set of priorities than libraries and researchers do, one that values increasing headcount over guaranteeing permanence.


I'm not sure how it works at ACM, but often, it's people retyping the contents of your article into a JATS-XML template and adding additional metadata (authors, date of publication, perhaps who funded it, etc.), which is then used to generate several outputs (e.g. PDF, HTML, but also citation lists, etc.).


>The Journal Article Tag Suite (JATS) is an XML format used to describe scientific literature published online. It is a technical standard developed by the National Information Standards Organization (NISO) and approved by the American National Standards Institute with the code Z39.96-2012.

https://en.wikipedia.org/wiki/Journal_Article_Tag_Suite

>LaTeXML is a free, public domain software, which converts LaTeX documents to XML, HTML, EPUB, JATS and TEI.

https://en.wikipedia.org/wiki/LaTeXML

The wonderful thing about standards is that there are so many of them. And each one has variations.


> people retyping the contents of your article

Wow. Well I can imagine that’s expensive.


Thank you for the correction.

IEEE's $193m is where we should focus our attention, when it comes to this expense line.


I agree. I have no idea what IEEE is doing that costs that much. And while I don't take as hard a line against them as I do against Elsevier, I have never published with them and don't currently have any plans to change that.


I'm not sure how many articles are published a year in ACM [1], but the answer seems to be a few tens of thousands. That's a per-article publishing cost of a few hundred dollars, which is not unrealistic to me.

[1] The ACM Digital Library claims 2.8 million published over 84 years, or about 33,000/year if divided equally over the years (which is laughably false). Some number of that quantity may include citations for keynotes or posters, which aren't really research papers, but I don't have a good handle on that rate.


The 2019 annual report gives some details: 34,000 full-text articles were published in the DL. This will exclude non-archival content like keynotes, posters, etc., if conference organisers provide correct metadata.


Backblaze can back up arbitrarily large local drives, but does not allow you to set network drives as backup sources (for precisely this reason). It's fine with the local drive being shared - our desktop's big storage drive is exposed over the network - but it can detect and refuse mounts from other machines. I don't know what it does with an iSCSI drive, haven't tried.

I think it's harder to detect network mounts in a way that wouldn't have a bunch of false positives on Linux.


It isn't just a change of server, it's an entirely different back-end infrastructure. To Do is a modern UI on top of Exchange tasks.


Google did not create reCAPTCHA. They bought it; it was started by Luis von Ahn, who went on to create Duolingo.

When reCAPTCHA was created, the alternative was CAPTCHA, which tried to impede bots but did not generate any social benefit. This was the genius of the original reCAPTCHA concept: the time taken to 'confirm humanity' could be channeled into the socially-useful endeavor of digitizing books. Capture some of the heat emissions of impeding bots for a useful purpose, rather than letting it all go to waste.

Now, yes, Google is using it to train their self-driving car AI, and there's a bunch else going on that connects it to Google's surveillance apparatus. There's much to legitimately criticize there. I personally don't view training Google's proprietary AI as the same kind of intrinsically altruistic purpose as digitizing the world's pre-digital books.

But putting the entire concept on blast with erroneous history that can be corrected with about 60 seconds on Wikipedia doesn't help the argument at all.


I’m hereby inventing a new rule called Chesterton's Wild Boar Fence, the essence of which is that people who don't have gardens, or don't hang out at night, will always complain about wild boar fences, because they lack any awareness of the beast and its damage, or downright believe its mere existence is a myth.


I second this notion. I've been using sharks as an example: Sharks aren't dangerous when swimming, because we don't swim in deep water, because sharks are dangerous. Shark attacks in deep water beaches are fatal at roughly the same rate as riding a bike without a helmet. [1]

The risk of sharks is tempered by our experience with them. Few people swim at deep-water beaches (because they have signs saying "Danger! Sharks!"), and those that do typically take appropriate precautions and maintain awareness.

Sharks don't want to eat you and do quickly let go of swimmers they attack, but that's irrelevant because the damage has already been done. When I was young, a large amount of education was put into stating that shark attacks were rare, and it's true both in absolute numbers and by comparison with how feared they are. Jaws and its knockoffs spread irrational fear in the 70s/80s, and my early 90s childhood came with the counterpressure there, but that counterpressure caused many in my peer group to misunderstand the risks. Shark attacks drop off hard to zero if you're swimming in shallow water. Even at 10 meters, which is not uncommon for surfers, they are a real risk. But surfers spend little time at 10 meters out. All of this forms a balance.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3941575/


I think this can be summed up with a single word: incomprehension. Or in the latter example: ignorance.


I've been toying with this idea under the general rubric of "manifestation", that is, the distinction in understanding between things which one has experienced manifestly -- directly -- and those one has not.

This covers any number of circumstances -- why travel is broadening, why rare / degenerative / mental health conditions are so frustrating to explain to healthcare providers / family / others, trying to communicate specialised knowledge, historical bias, what the old know that the young do not (and rarely, vice versa). Tacit vs. explicit knowledge. Theory vs. experience.

There's probably far better existing terminology than what I've come up with (Hume, Kant, and Berkeley address this, as does Plato, within philosophy). But it's also a major concern in a highly diverse yet tightly interconnected world.


Saying it "impedes bots" is a little generous; it impedes humans as well. Or rather: it works on a spectrum where bots are at one end (fully obstructed), easily tracked humans at the other (free entry), and humans who disable tracking devices and/or eschew Google services somewhere in the middle (allowed to pass after much hassle).


It definitely impedes Firefox users as opposed to Chrome users.


Yes, it's the only reason I can't use Firefox mobile without having to fire up Google Chrome from time to time: a lot of sites block FF but not Chrome, thanks to the stupid reCAPTCHA.


To be fair, having a tracked history does make it easier to prove that you are human.


reCAPTCHA generally only works for me after I do it 3-4 times. Purely because I use a VPN. reCAPTCHA v3, curiously, works just fine when I'm using a VPN (if I allow it to run in the first place).


reCAPTCHA's audio option for the visually impaired (the headphones icon) is relatively fast compared to clicking storefronts. reCAPTCHA will sometimes deny me the audio option for unspecified reasons, which I venture to guess could be challenged in US courts under the ADA (the Americans with Disabilities Act).


Yep, it was originally run by Carnegie Mellon (as you mentioned, by its creator Luis von Ahn and others).

This article also doesn't seem to touch on the newer reCaptcha that tracks you everywhere on a website (you'll notice a little blue box on the bottom right with the logo where this happens), not just on login or user input pages.

There is a lot to criticize about reCaptcha, including privacy concerns for sure, and there were some other posts about it on HN before.


So basically it has been perverted and acts in ways that harms people now, like most things Google touches.


I wonder whether, in some jurisdictions, Google should have to pay for forcing people to train their AI. I imagine it could be possible to do that in Germany, or under some EU laws.


Perhaps in Germany Google can charge for their services, but waive the fee if they solve captchas.


Am I the only person here who always entered absolute nonsense for the scanned word? The original reCaptcha had two words, one which was clearly generated and another which was clearly scanned - to "solve" the captcha all you needed to do was to enter the generated word correctly, the other could be literally anything. So I always entered banana or something similar for literally everything.


You're not the only person -- I have a friend who did this, also generally inserting silly words in the side he guessed was scanned from a book.

I have a hunch that von Ahn knew this would happen and the same scan is shown to multiple users before a word is chosen.


reCAPTCHA v2 blocks [1] people with disabilities from accessing basic services on the web, such as registering to vote, paying utilities, filing taxes, or accessing medical services. This practice is likely illegal, and the sites which facilitate it may be legally liable.

reCAPTCHA v3 has no user interface; it only returns a score upon which the site operator can act, often delaying or blocking access [2] to services. In this case the responsibility falls entirely on the site, while Google is no longer at risk of being found liable for the damage caused by its discriminatory service.
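
(For context, the server-side half of v3 is just a verification call that hands back that score; a minimal sketch, with the secret key and the 0.5 threshold as placeholders:)

    import requests

    def recaptcha_score(token: str, secret: str = "YOUR_SECRET_KEY") -> float:
        # Verification endpoint from the v3 docs [3]; the JSON response
        # includes a 0.0-1.0 "score" that the site acts on however it likes.
        resp = requests.post(
            "https://www.google.com/recaptcha/api/siteverify",
            data={"secret": secret, "response": token},
            timeout=5,
        )
        body = resp.json()
        return body.get("score", 0.0) if body.get("success") else 0.0

    # Policy chosen entirely by the site operator, not by Google, e.g.:
    #   if recaptcha_score(client_token) < 0.5: block_or_challenge()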

reCAPTCHA v3 works best when it is embedded on every page of a site [3]. The service collects detailed interaction data on every website you visit which has implemented it. The extent of tracking is similar to Google Analytics, but you cannot block it without losing access to large portions of the web.

The collected data is highly sensitive: it contains not only your browsing history, but a detailed snapshot of your actions on sites. Mouse movements can reveal health issues which affect your motor functions, and your interests and desires are laid bare based on how you interact with content.

Google must be compelled to disclose in the reCAPTCHA privacy policy what data is collected and how that data is used. Journalists have asked Google for years to clarify how the data collected by the reCAPTCHA service is being used, and their answer is always the same: we only use your data to provide the reCAPTCHA service, and it is not used to personalize ads.

The problem is, those are just words from their PR department; the legally binding documents are the privacy policy and the terms of service. reCAPTCHA uses the same privacy policy as the rest of Google's services, which gives them the right to use your data for ad personalization.

You must resist adding reCAPTCHA v2 and v3 to your sites. There are alternatives [4] which can offer the same level of protection for your services when used the right way. Their implementation may not be as convenient as reCAPTCHA's, but that is the price you must pay to prevent Google from mining our personal data and our every interaction on the web.

People are forced to hand over their personal data to Google at all times, otherwise they face losing access to services, and being excluded from societal processes that are increasingly happening exclusively online.

This is where privacy rights and human rights are violated, and it is upon all of us to make our voices heard, so that existing legislation is enforced and new laws are put in place to prevent companies from abusing and exploiting us.

Handing over our data to Google must not be a condition to fully participate in society.

[1] https://github.com/w3c/apa/issues/25

[2] https://news.ycombinator.com/item?id=20295333

[3] https://developers.google.com/recaptcha/docs/v3

[4] https://www.w3.org/TR/turingtest/


Couldn't one use U2F as a CAPTCHA alternative, obviously without information about the stick itself, only the batch attestation, and then throwing the registration in the bucket? After all, it does require an interaction in meatspace. Sure, a bot could be engineered to trigger it, but you can't just relay the challenge somewhere and have someone else clear it for you, and even if you build a Lego contraption or whatever to clear your CAPTCHA, it's FAR slower than having many people on a solving service help you.


Exactly. Google should be banned from all online public services of any kind, since they can't be avoided. It's unreasonable to expect people to shop around for a town to live in that doesn't, and never will, use privacy-invading Google services like reCAPTCHA.

I'd even support a ban for other core services like utilities and banking that may not be public entities.


This brings a thought to mind: what if someone created a service that channels Amazon Mechanical Turk tasks as CAPTCHAs, so that you(r website) could make a buck off the people solving captchas?


They did the same thing with the game Ingress to collect walking data, and with GOOG-411 to collect voice data.

Well, specifically Niantic, which was a Google-internal thing at the time.


I could swear that close to the launch of Ingress some Niantic employee said something along the lines of, "We're not actually collecting much data. It's all secretly an evil plan to get nerds to exercise," but I can't find a source.


I believe you, but I'm also confident that your quotation was said in jest.


Could an organization do the same thing, but as a non-profit with open datasets? This way everyone benefits.


> but putting the entire concept on blast with erroneous history that can be corrected with about 60 seconds on Wikipedia doesn’t help the article at all.

Nor does an entirely fallacious premise. reCAPTCHA v3 is entirely transparent and non-invasive to users. In fact it's retroactive, to help the site admin figure out what to do with the score:

https://developers.google.com/recaptcha/docs/v3


>reCAPTCHA v3 is entirely transparent and non-invasive to users

Except when you don't opt into Google tracking you (by blocking third-party scripts), in which case your life still gets to be hell.


> non invasive to users

... who are fully and completely opted into all Google tracking and have previously participated in Google's ecosystem.

Pretty odd definition of "entirely fallacious".


That's what they say, but it rarely works out that way for me. And yes, it's probably because I always block all tracking, and use VPNs and/or Tor.


> non invasive to users.

Except for the invasion of my time and attention, used to train Google's AI to get better at recognizing traffic signals. I took that as the main point of the article.


That's v2.

v3 is "invisible" and is supposed to be deployed to every page on the site, and the site is the one who decides how to punish you for not matching their normal audience.


Not just invisible: unlike "invisible reCAPTCHA" (which was kind of in between v2 and v3 and does spawn a challenge on its own), v3 is entirely non-interactive, and as you said, the site/admin decides the punishment.


Yes. Bandits will often converge more quickly to the optimal strategy, but it is much more difficult to understand why that strategy is optimal and generalize from the bandit outcomes to predict future performance and performance of other strategies.

It isn't impossible - bandits are seeing adoption in medical trials to avoid precisely the problem discussed - but the standard experiment design and analysis techniques you learn in a decent college statistics class or introductory statistics text no longer apply. That's one of the beauties of A/B testing: while it does require substantial thought to do well, the basic statistics of the setup are very well-understood at this point.
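
To illustrate the "well-understood" part: the analysis of a basic two-arm A/B test is a textbook two-proportion z-test. The counts below are invented:

    from math import sqrt, erfc

    # Hypothetical results: control converts 200/5000, variant 248/5000.
    conv_a, n_a = 200, 5000
    conv_b, n_b = 248, 5000

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))   # two-sided p-value under the null

    print(f"lift: {p_b - p_a:.4f}, z = {z:.2f}, p = {p_value:.4f}")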


But for results to generalize or to understand why, the confounders must be accounted for in the randomization. This is really hard to do well -- there are often subtle influences that aren't sufficiently understood how they impact these non-linear systems. What makes someone convert? A million different factors; changing the color of a button in one context doesn't necessarily tell me much about how people would respond to that experience in another context.

It's easy to underestimate how complex things are, because we only see some superficial aspects of e.g. a user/software interaction model. This flaw is down to how our brains work -- ref "What you see is all there is".


I disagree. I’ve spent a lot of time staring at bandit outcomes and usually they match some sort of intuition of why a variant might be exceptional.


That could be post-hoc reasoning, though. It would be interesting to pre-register your hypotheses, or see whether you could tell bandit outcomes from random ones.


Sure it’s post-hoc reasoning, but it doesn’t matter because I’m not trying to invalidate a hypothesis.

I’m looking for variants that win. When I find one that wins I look at it and try to add more of the same flavor to the product.

This process works.


This is literally the logical fallacy. You could get lucky. Maybe you have obvious gains to chase. But bad logical arguments are bad because they never work forever. They are corrupted heuristics that can get you in trouble without critical thinking.

Edit: added in forever. Phone dropped some wording I originally had. I think.


Call it a genetic algorithm if you like. I’m looking for incremental wins in a world of infinite possibilities, not truth.


Incremental wins can still lead to dead ends. My phrasing was off in my post. I meant to say that the fallacies aren't that the tactics never work, just that they can stop working without you really realizing it. A heuristic that can lead you down a dead end.

By all means, keep doing it if it is working for you. But don't confuse it as good advice. And stay vigilant.


Products exist in human reality not some science paper. There are no absolute truths, everything dead-ends eventually. It’s like trying to prove that one set of genes is better than another for future survival - an impossible task.


This betrays a belief that science doesn't reflect the real world. It absolutely does.

Again, it may be working in your case. Argument to authority can go a long way. Even ad hom attacks often exist due to a "smell" of the person speaking. It is not, however, logically sound and can easily lead to unsupportable positions.

So, take care. And realize that a lot of the damage of poor practices may be tangential. For example, a belief that the real world can not be described by science.


Isn't this problem also an issue when people talk about transferring what they learn from one test to another test? That is frequently cited as a benefit of A/B testing.


Simple, non-evil answer: because it would severely impede spam prevention. This extensive writeup has details: https://moderncrypto.org/mail-archive/messaging/2014/000780....

Long story short, to control spam, you basically have three options:

1. Control who can send messages

2. Inspect message content

3. Impose costs

Signal and WhatsApp chose (1). E-mail (and probably any federated protocol) requires (2) or (3), and there is no universally-adopted (3) for e-mail.
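
(For a toy illustration of (3) in the hashcash style: the sender burns CPU finding a partial hash preimage tied to the message, and the receiver verifies it with one hash. Nothing here is an actual e-mail standard, it's just the shape of the idea.)

    import hashlib
    from itertools import count

    DIFFICULTY = 20  # leading zero bits required; tune to make sending costly

    def mint_stamp(message: str) -> int:
        # Sender pays: search for a nonce whose hash clears the difficulty bar.
        for nonce in count():
            digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
            if int(digest, 16) >> (256 - DIFFICULTY) == 0:
                return nonce

    def check_stamp(message: str, nonce: int) -> bool:
        # Receiver verifies cheaply with a single hash.
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        return int(digest, 16) >> (256 - DIFFICULTY) == 0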


Their business model is to mine your mail. Spam censorship is just a feature that happens to fall out.


Google hasn't used email contents for ad targeting in a couple years: https://blog.google/products/gmail/g-suite-gains-traction-in...


Just call it "sentiment analysis". Forgive me if I don't take their words at face value.


Yes, publisher profits - especially for places like Elsevier - are ridiculous, even for the costs involved.

But it's easy to overlook important costs in the publication system. One, as others have observed, is editors (and some administrative staff). Their value is certainly up for debate; I have had colleagues ascribe significantly more value to them than some of the other comments here.

Another, though, is archival and continuity of access. The publisher I am most familiar with is the ACM; they have contracts with archivers in place so that should they go insolvent or otherwise be unable to continue providing access to the published papers, these firms will take their archives live so the work remains accessible. By the very nature of the business, the fees they collect now need to be sufficient to keep the work accessible in perpetuity with no further payment.

As with other societies, ACM's journal revenue also goes to fund conference development, outreach and advocacy activities, student grants, etc.

There is a lot of rent-seeking in scholarly publishing, even (in my opinion) from scholarly societies. I personally believe that many of our needs could be better met by investing the funds we currently spend on commercial publishing in university libraries and rehoming the scholarly publishing enterprise there. However, sustainable open access is not as simple as just running a web server.


If the machine (1) has soldered-on RAM (preventing cold boot attacks) and (2) the portions of the OS that run prior to user authentication are sufficiently secure, then it really doesn't seem to be a problem.

Last I knew, Windows does not like to let you enable this mode on a machine with removable RAM that doesn't have compensating security features.


And also no Thunderbolt/FireWire, and/or an IOMMU that the OS actually uses.


I'm sorry, what?

Windows 100% allows you to use TPM + BitLocker and back the keys up to AD on any sort of computer, regardless of whether the RAM is removable.

