Interestingly, this pattern doesn't work with pure research: the Institute for Advanced Study (IAS) in Princeton hosted Einstein, Hermann Weyl, John von Neumann, and Kurt Gödel, but little important work was done there.
My use case is to cut segments from one video file and append the segments together. This way I can trim ads or parts I don't want.
The aged VirtualDub does this as long as the container is AVI (the codec can be AVC/H.264, etc.). I used Avidemux to convert MP4 to AVI without re-encoding (just changing the container format), then used VirtualDub to trim it.
Actually, I just write timestamps in a text file, then use a Groovy script to generate a script that VirtualDub can read, and run VirtualDub with that script to do the trimming.
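Something in that spirit could look like the sketch below (Python rather than Groovy; the "MM:SS MM:SS" cut-list format, the frame rate, and the file names are assumptions for illustration, and it emits the usual VirtualDub job-script commands VirtualDub.Open, VirtualDub.subset.AddRange, and VirtualDub.SaveAVI):

```python
# cutlist.py - turn "start end" timestamp pairs into a VirtualDub script.
# Assumes a constant frame rate and "MM:SS" timestamps; both are illustrative.

FPS = 25.0  # assumed constant frame rate of the source AVI

def to_frames(ts: str) -> int:
    """Convert an "MM:SS" timestamp into a frame number."""
    minutes, seconds = ts.split(":")
    return int((int(minutes) * 60 + float(seconds)) * FPS)

def build_script(cutlist_path: str, source: str, output: str) -> str:
    lines = [f'VirtualDub.Open("{source}");', "VirtualDub.subset.Clear();"]
    with open(cutlist_path) as f:
        for line in f:
            if not line.strip():
                continue
            start, end = line.split()
            first, last = to_frames(start), to_frames(end)
            # AddRange takes a start frame and a length in frames.
            lines.append(f"VirtualDub.subset.AddRange({first}, {last - first});")
    lines.append("VirtualDub.video.SetMode(0);")  # direct stream copy, no re-encode
    lines.append(f'VirtualDub.SaveAVI("{output}");')
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_script("cuts.txt", "input.avi", "output.avi"))
```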
- Hash codes are difficult for humans to read. URLs within a hierarchy share common patterns and carry some meaning; hashes all look the same and give no hint about the content.
- You have to copy-paste them; it's almost impossible to type one out or compare two visually similar strings by eye.
- You may end up using a link-shortening service for the hashes, but link shortening can already solve the portable-file-host problem on its own.
Merkle trees can solve some problems, but I don't think portable URLs are one of them.
I agree 100%. Also, Merkle Trees would only benefit static content uploaded to the Internet. Updating any dynamic content would constantly generate a new root hash meaning a new URL with each update to that specific content. It's just not a good option for anything other than static content in general.
One place I could see it being valuable is with an online archiving service like Archive.org, where content doesn't change except when a new snapshot is recorded to capture whatever changes have been made.
The way this is done in dat (via the hypercore module) is that the share key is a public key, and the Merkle tree is signed with the corresponding private key. Updates can stream in on the log, and only the keys in the tree that have changed need updating. The history is stored in the log, so you can retrieve any previous version.
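A toy sketch of that pattern (this is not hypercore's actual data format or API, just an illustration of signing a Merkle root with an Ed25519 key via the Python cryptography package, so readers can check updates against the same public "share" key):

```python
# Toy signed-Merkle-root sketch - not dat/hypercore's real wire format.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def merkle_root(chunks: list[bytes]) -> bytes:
    """Hash the leaves, then hash pairwise up to a single root."""
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Readers hold the public "share key"; the writer keeps the private key.
signing_key = ed25519.Ed25519PrivateKey.generate()
share_key = signing_key.public_key()

log = [b"version 1 of the data"]
root_sig = signing_key.sign(merkle_root(log))

# Append an update to the log and publish a freshly signed root.
log.append(b"an update")
root_sig = signing_key.sign(merkle_root(log))

# A reader verifies the latest root against the share key they already have;
# verify() raises InvalidSignature if the data was tampered with.
share_key.verify(root_sig, merkle_root(log))
```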
I actually really like the idea of updates generating new hashes and so on. While I don't disagree with your statement, I definitely feel it's a challenge worth looking into.
Why? Well, a lot of my reasons are what's touted on the IPFS homepage, but to put it in my own words: I've become dismayed at how mutable the web is. It seems to entirely benefit those who seek to lie and disorient. Yet if something from an "honest person" leaks onto the internet (nude photos, credit card numbers, emails, etc.), it's nearly impossible to remove.
This has less to do with the technology and more to do with human nature. Facts are easy to mutate and spread misinformation about; pages can be edited, blocked, DDoSed, etc. Yet leaks from small actors, e.g. if I accidentally upload my secret key, are basically permanent on the web. So I feel like the mutable web is all cost to the public, with no benefit.
I feel/hope that IPFS and IPNS can allow software to present a normal Reddit-like experience, where users never deal with weird hashes and the like, but underlying it all is an immutable, entirely auditable paper trail. Information is key in this day and age, and if we can have an immutable web with no UX loss, I think that's a boon.
As it stands, people have identified trends among bad actors on the web, such as politicians botting Reddit to sway opinion, but the trail goes fuzzy quite quickly when the entirety of the bots' content can be deleted, mutated, etc.
I don't see how content addressing helps with leaks. If I have some stolen data I can distribute it with padding on the end so it has a different address faster than you can push out blacklists.
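For example (a trivial illustration where the content address is just a bare SHA-256 hash, which is a simplification of how IPFS actually builds addresses):

```python
import hashlib, os

data = b"the leaked file"
print(hashlib.sha256(data).hexdigest())                  # original address
print(hashlib.sha256(data + os.urandom(8)).hexdigest())  # padded copy: entirely different address
```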
It doesn't - my point was that leaks (as in, anything of mine that gets out) already tend to be permanent on the web. Yet, news sites, botters on social media, etc - all change their content with no paper-trail for us to follow.
Anything I leak is basically permanent on the current web. If that's the case already, why would I want a mutable system in place where people can edit what they said? Alter what was posted, alter votes, take down or DDoS content, etc.
Editing is easy, fwiw - content hashes are immutable and are not to be used for mutable content (obviously). That's why IPFS has IPNS - so we can have mutable pointers to data.
Or you can go one step further and use a third-party server to provide the link, because maintaining your own server is a burden for many people. "Your server" is most likely somebody else's server anyway.
That would be Google Voice for URLs. The OP's method is like using people's names as phone numbers.
Another problem with the OP's method is that it's difficult for humans to read and very difficult to verify by eye. There are also no common parts shared by files organized in the same hierarchy.
Yes, the chance that the hash will be broken is much higher than the chance of collisions occurring randomly. I'm just responding to "hash functions only guarantee no collisions to a high probability." People really underestimate how strong that probabilistic guarantee is.
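As a back-of-the-envelope illustration (assuming an ideal 256-bit hash, so the usual birthday bound applies):

```python
# Birthday bound: P(collision) is roughly n^2 / 2^(b+1) for n items and a b-bit hash.
from decimal import Decimal

def collision_probability(n: int, bits: int = 256) -> Decimal:
    return (Decimal(n) ** 2) / (Decimal(2) ** (bits + 1))

# Even after hashing 10^18 distinct items, the odds of any random collision are absurdly small.
print(collision_probability(10**18))  # ~4.3e-42
```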
As long as the hash function remains unbroken, untrusted sources can't screw with you.
Hash functions tend to be broken gradually and publicly, and we migrate to new ones as they start to look shaky. It's theoretically possible for someone to privately break a function that everyone else thinks is secure, but it would be an extremely impressive achievement since lots of full-time cryptographers work on breaking these things and publish every little bit of progress they make.
But does every networked device have ethernet these days? My laptop doesn't, nor does my phone. There are some contexts where Ethernet is more appropriate, but that doesn't mean it's more appropriate everywhere.
> Pushing progress is not an excuse to force transition with cost and effort burden on customers.
You can't make progress without doing some of that.
> But does every networked device have ethernet these days? My laptop doesn't, nor does my phone.
Every stationary or semi-stationary networked device should probably have Ethernet. A desktop or laptop should; a phone shouldn't (because it's mobile). It'd be nice if IoT thermostats and the like were wired, but that would require homes to have Ethernet-over-power or Ethernet runs in the walls or something.
WiFi is inferior to Ethernet, except when mobility is necessary. So for mobile devices like phones, it's not needed. For tablets, probably not (but imagine if your charging cable could also carry fast, reliable networking to your tablet, so you could have a better experience while reading or watching TV, but still be able to get up & go).
There have been several iterations of the most popular input methods in mainland China (Cangjie is only used in the Hong Kong area):
1. Numerous attempts were made to encode the thousands of Chinese characters onto the keyboard. The most popular is Wubi, which can describe a single character in 4 keystrokes so you don't need to select from a candidate list. It's used by professional typists, and many regular users spent a lot of time learning the system.
2. Pinyin IMEs improved, providing better prediction and word input (typing a multi-character word directly instead of one character at a time). They're acceptable even though you need to select from a list, and you don't need to learn a new system. Most casual users used the one bundled with the Chinese version of Windows.
3. Whole-sentence IMEs appeared, trained on Chinese text corpora, which try to predict an entire sentence once you've typed the pinyin for all of it. These proved short-lived, since in many cases you still need to fix several places, even if the prediction is 100% correct 30% of the time.
4. All the major IT companies started developing their own IMEs as a way to collect user input and provide an entry point to their products.
5. An old approach, double pinyin (shuangpin), started to gain popularity again. It's still pinyin, but you use 2 letters to encode a whole syllable instead of 4 or 6. All newer IMEs can switch between different encoding methods, like pinyin, double pinyin, and strokes.
6. Older users tend to just use handwriting input on smartphones.
Modern Chinese IMEs use pinyin-based predictive input (#2) as the primary method, together with a Wubi-style method (#1) as a secondary input method. E.g. in the Sogou IME, if you type 'u' (which can never begin any pinyin syllable), you can then type letters to denote strokes or common components, such as h for heng 一, s for shu 丨, p for pie 丿, k for kou 口, s for shui 水氺 or 氵, etc. This can be useful for distinguishing between characters for commonly used syllables such as 'yi', 'ji', etc.
Regarding #5, I found it difficult to switch from normal pinyin to shuangpin (double pinyin) in one go, and none of the shuangpin methods or software provide staged switching. E.g. a shuangpin IME could let me use 'i', 'u' and 'v' to type 'ch', 'sh' and 'zh' while keeping the existing vowel combinations (e.g. 'ue', 'ao') for stage 1, and then at stage 2 offer shortcuts for some of the vowel combos only once I had gotten into the habit of the stage 1 shortcuts. Of course, the choice of shortcuts available would then be constrained by the staged learning path, but because there are so many different shuangpin IMEs available, each with differing key assignments and none of them standard, there's probably still room in the market for a new key assignment that caters to such staged learning.
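To make the staged idea concrete, here's a purely hypothetical "stage 1" layout (not any existing shuangpin scheme): only zh/ch/sh are remapped to v/i/u, and every final is still typed as in ordinary pinyin.

```python
# Hypothetical "stage 1" shuangpin: remap only the two-letter initials,
# leave every final spelled exactly as in standard pinyin.
STAGE1_INITIALS = {"zh": "v", "ch": "i", "sh": "u"}

def stage1_encode(syllable: str) -> str:
    """Encode one pinyin syllable with the stage-1 shortcuts."""
    for initial, key in STAGE1_INITIALS.items():
        if syllable.startswith(initial):
            return key + syllable[len(initial):]
    return syllable  # everything else is typed unchanged at this stage

# "zhong guo shuang pin" -> "vong guo uuang pin"
print(" ".join(stage1_encode(s) for s in ["zhong", "guo", "shuang", "pin"]))
```

A later "stage 2" could then shorten the finals themselves, once the stage-1 habits have set in.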