...and consequently limited driver support. Copyleft is permissive. There's no need for non-copyleft licensing unless you want restrictive proprietary licensing somewhere or sometime.
"Limited driver support", here, means ... you don't have access to, and the right to fork, the source code?
As an old engineer, to me, limited driver support always meant: "not that many drivers". You seem to mean: "I can't fork".
Or am I mis-reading you?
Many of us in the new-new world of next-gen system integrators acting as software product developers don't always have sole control over the drivers in our stack that we need to deliver key features to key clients.
The "hard" open source position of the copy-left crowd incentivizes old-school pragmatic management to take a "why bother" stance and instead of open sourcing 80% and getting yelled at because it isn't 100%, just go with 0%.
Which is sad. And unnecessary. They won't deal with the very real business risk that arises when you treat liberally with zealots.
Ideas for Online Codes
by catid, posted 9:35pm Thu. Jan 26th 2012 PST
Online Codes are an unpatented approach to rateless forward error correction (FEC) codes from a 2002 paper by Petar Maymounkov. I first learned about Online Codes from a Wikipedia article, and since then I have been reading as much as I can about how to implement them with good performance.
Since that paper was published, Luby et al. have made fantastic advancements in rateless error codes, culminating in RaptorQ last year. Unfortunately, they decided to patent their algorithms, which makes that wonderful work worthless for almost everyone on the Internet. So, roll back the clock by a decade and start over. sigh
Online Codes are a great place to start. I like the layered approach where there is an inner code and an outer code. After reading about a lot of modern implementations I have some ideas to try and see what will help Online Codes reach good performance:
(1) Peeling decoder. This is an unpatented approach to decoding sparse low-density parity check (LDPC) codes. It's O(N) and fast, and pretty easy to get running (a minimal sketch follows this list).
(2) Gaussian elimination decoder. This is also an unpatented approach, used to solve linear systems of equations when the equations are not sparse (also sketched after the list).
(3) Combine (1) and (2). This is an unpatented approach that provides maximum likelihood decoding in roughly O(N) for good performance. I noticed that the "Raptor Codes" monograph includes a description of this algorithm, so it must be excellent in practice. I've implemented it myself and managed significant performance (300 MB/s) before optimization. I have a few ideas for how to optimize this:
+ Reduce the number of variables needed by analyzing how data is passed between parts of the algorithm and eliminating unneeded data.
+ Profiling the application to see where the hot spots are, and moving precomputation into parts of the code that take up less time.
+ When back-substituting the Gaussian elimination decoded (2) symbols back into the peeling decoded (1) symbols, the check matrix is pretty dense. Borrowing a trick from windowed modular exponentiation, XOR combinations of symbols can be precomputed and stored in a table to greatly reduce the number of XORs required (see the windowed sketch after the list). I think this can be implemented without any additional memory since at that point in the algorithm many of the received rows have already been copied over to the output file.
+ Reduce the number of memory copies by deferring memcpy() until back-substitution and using memxor() where possible to avoid the copy (a minimal memxor() is sketched after the list).
(4) Use two inner codes instead of one. The second inner code would be based on GF(256) octets instead of GF(2) like the rest of the check matrix. Since the normal overhead is low, adding just a few more check matrix rows in a higher field should help the recovery properties a lot. This seems like a logical extension to the layered approach of Online Codes (GF(256) table arithmetic is sketched after the list).
(5) Decode all codes at once. The outer code might be combined with the inner codes into one large matrix to solve. This would improve decoding performance to that of the maximum likelihood decoder, and should work well with the decoder I'm writing. We'll see...
(6) Since A = Decode(Encode(A)) and A = Encode(Decode(A)), the code can be made systematic by precomputing Decode(A) on the transmitter and then running Encode() forward, so that the first symbols sent are equal to the input data file (a toy version closes out the sketches after the list). This can reduce processing time but still allows check symbols to be generated afterwards with good recovery properties. There should be a lot of room for optimization on the transmitter side too, since it already knows what the output should be.
(7) Raptor codes use some kind of "permanent inactivation" trick that makes the outer code more complicated but apparently helps with the recovery properties when the number of symbols is low. Might be patented and unusable; we'll see...
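
To make (1) concrete, here is a minimal peeling sketch over GF(2). It is illustrative only: symbols are shrunk to single bytes (real symbols are whole blocks, with the XORs applied blockwise), and the names (Check, peel) are mine, not from Maymounkov's paper. A production decoder would keep a worklist of degree-1 checks to stay O(N) instead of rescanning:

```cpp
// Minimal peeling decoder sketch over GF(2). Each check equation says:
// the XOR of the listed unknown source symbols equals the check's
// current value. Peeling repeatedly finds an equation with exactly one
// remaining unknown, solves it, and substitutes the result everywhere.
#include <cstdint>
#include <optional>
#include <vector>

struct Check {
    std::vector<int> unknowns; // indices of still-unsolved source symbols
    uint8_t value;             // check symbol XORed with all solved symbols so far
};

// Returns the recovered symbols, or nullopt if peeling stalls.
// (A real decoder keeps a queue of degree-1 checks to stay O(N).)
std::optional<std::vector<uint8_t>> peel(int num_symbols, std::vector<Check> checks) {
    std::vector<uint8_t> out(num_symbols, 0);
    std::vector<bool> solved(num_symbols, false);
    int remaining = num_symbols;

    bool progress = true;
    while (remaining > 0 && progress) {
        progress = false;
        for (Check& c : checks) {
            // Substitute any symbols solved since we last visited this check.
            std::vector<int> still;
            for (int u : c.unknowns) {
                if (solved[u]) c.value ^= out[u];
                else still.push_back(u);
            }
            c.unknowns = std::move(still);

            if (c.unknowns.size() == 1) { // degree-1 check: solve it
                int u = c.unknowns[0];
                out[u] = c.value;
                solved[u] = true;
                c.unknowns.clear();
                --remaining;
                progress = true;
            }
        }
    }
    if (remaining > 0) return std::nullopt; // stalled: peeling alone is not enough
    return out;
}
```

When peel() returns nullopt, that stall is exactly where the Gaussian elimination step of (2) and (3) takes over on the remaining unknowns.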
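For (2), a dense GF(2) eliminator can store each row's coefficients as a bitset, so one row operation is a single wide XOR. A sketch under that assumption (the fixed kMaxCols and the one-byte right-hand side are simplifications of mine):

```cpp
// Dense GF(2) Gaussian elimination sketch: rows are bitmasks over the
// unknowns, so a row operation is one XOR of a whole bitset.
#include <bitset>
#include <cstdint>
#include <vector>

constexpr int kMaxCols = 256; // sketch limit on the number of unknowns

struct Row {
    std::bitset<kMaxCols> coeffs; // which unknowns appear in this equation
    uint8_t value;                // right-hand side (one byte per symbol here)
};

// Reduces the system in place; returns false if it is singular.
bool gaussian_eliminate(std::vector<Row>& rows, int num_cols) {
    int pivot_row = 0;
    for (int col = 0; col < num_cols; ++col) {
        // Find a row with a 1 in this column to use as the pivot.
        int found = -1;
        for (int r = pivot_row; r < (int)rows.size(); ++r)
            if (rows[r].coeffs[col]) { found = r; break; }
        if (found < 0) return false; // no pivot: need more received symbols
        std::swap(rows[pivot_row], rows[found]);

        // Clear this column from every other row with one XOR each.
        for (int r = 0; r < (int)rows.size(); ++r) {
            if (r != pivot_row && rows[r].coeffs[col]) {
                rows[r].coeffs ^= rows[pivot_row].coeffs;
                rows[r].value ^= rows[pivot_row].value;
            }
        }
        ++pivot_row;
    }
    return true;
}
```

Because this clears the pivot column from every other row (Gauss-Jordan style), each pivot row ends with a single 1 left in it, so its value is directly the solution for that unknown.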
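The windowed-XOR idea from the back-substitution bullet under (3), sketched with hypothetical helpers (Symbol, xor_into, build_window_table are my names): for a window of W symbols, precompute all 2^W XOR combinations once, so each dense row pays one XOR per window instead of up to W. Note that __builtin_ctz is a GCC/Clang builtin:

```cpp
// Windowed XOR sketch, borrowing the table idea from windowed modular
// exponentiation. For a window of kWindow source symbols, precompute
// all 2^kWindow XOR combinations once; a dense row then covers its
// references into that window with one table lookup and one XOR.
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr int kWindow = 4; // 16 precomputed entries per 4-symbol window

using Symbol = std::vector<uint8_t>; // one block of the file

void xor_into(Symbol& dst, const Symbol& src) {
    for (size_t i = 0; i < dst.size(); ++i) dst[i] ^= src[i];
}

// Builds the table for symbols[base .. base + kWindow). Entry m is the
// XOR of every window symbol whose bit is set in m, built incrementally
// so each entry costs exactly one XOR.
std::vector<Symbol> build_window_table(const std::vector<Symbol>& symbols,
                                       int base, size_t symbol_bytes) {
    std::vector<Symbol> table(1u << kWindow, Symbol(symbol_bytes, 0));
    for (unsigned m = 1; m < (1u << kWindow); ++m) {
        unsigned low = m & (m - 1); // m with its lowest set bit cleared
        table[m] = table[low];      // reuse the smaller combination
        xor_into(table[m], symbols[base + __builtin_ctz(m)]); // GCC/Clang builtin
    }
    return table;
}

// A dense row whose bitmask over this window is `mask` then costs:
//   xor_into(output_row, table[mask]);
```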
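And the memxor() mentioned under (3). It is not a standard library function, just the obvious loop; this sketch does the bulk word-at-a-time through std::memcpy to stay alignment-safe:

```cpp
// memxor sketch: XOR `bytes` bytes of src into dst in place, so a
// decoded row can be combined into the output buffer with no
// intermediate memcpy. The word-sized loads/stores go through
// std::memcpy, which compilers lower to plain moves without
// alignment undefined behavior.
#include <cstddef>
#include <cstdint>
#include <cstring>

void memxor(uint8_t* dst, const uint8_t* src, size_t bytes) {
    size_t i = 0;
    for (; i + sizeof(uint64_t) <= bytes; i += sizeof(uint64_t)) {
        uint64_t a, b;
        std::memcpy(&a, dst + i, sizeof(a));
        std::memcpy(&b, src + i, sizeof(b));
        a ^= b;
        std::memcpy(dst + i, &a, sizeof(a));
    }
    for (; i < bytes; ++i) dst[i] ^= src[i]; // tail bytes
}
```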
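For the GF(256) rows of (4), the usual table-driven arithmetic applies: addition is XOR, and multiplication is two log lookups plus one exp lookup. This sketch uses 0x11D, one common primitive polynomial; a finished codec might pick a different one:

```cpp
// GF(256) sketch for the octet-based check rows: precompute log/exp
// tables once, then multiplication is two lookups and an index add.
#include <cstdint>

static uint8_t gf_exp[512]; // doubled so gf_mul can skip a mod 255
static uint8_t gf_log[256];

void gf_init() {
    // 0x11D = x^8 + x^4 + x^3 + x^2 + 1, a common primitive polynomial.
    unsigned x = 1;
    for (int i = 0; i < 255; ++i) {
        gf_exp[i] = (uint8_t)x;
        gf_log[x] = (uint8_t)i;
        x <<= 1;
        if (x & 0x100) x ^= 0x11D; // reduce modulo the polynomial
    }
    for (int i = 255; i < 512; ++i) gf_exp[i] = gf_exp[i - 255];
}

uint8_t gf_mul(uint8_t a, uint8_t b) {
    if (a == 0 || b == 0) return 0;
    return gf_exp[gf_log[a] + gf_log[b]]; // index sum is at most 508
}

// A GF(256) check row accumulates gf_mul(coefficient, symbol_byte)
// with XOR, where the GF(2) rows accumulated plain XOR alone.
```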
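Finally, the systematic trick of (6) in miniature. This toy uses a 3-symbol invertible XOR code of my own, not a real Online Code, purely to show the transmitter-side flow: "decode" the file into intermediate symbols first, then encoding forward reproduces the file verbatim before producing extra check symbols:

```cpp
// Toy sketch of the systematic construction. Encoder row i is the XOR
// of a fixed subset of intermediate symbols; rows 0..k-1 form an
// invertible (here lower-unitriangular) matrix over GF(2), so the
// transmitter can solve for the intermediates by forward substitution.
#include <array>
#include <cassert>
#include <cstdint>

constexpr int k = 3;
constexpr uint8_t rows[4][k] = {
    {1, 0, 0},
    {1, 1, 0},
    {0, 1, 1},
    {1, 1, 1}, // an extra check row beyond the first k
};

uint8_t encode_row(int i, const std::array<uint8_t, k>& inter) {
    uint8_t out = 0;
    for (int j = 0; j < k; ++j)
        if (rows[i][j]) out ^= inter[j];
    return out;
}

// Solve rows[0..k-1] * inter = data; possible here because the toy
// matrix is lower-triangular with ones on the diagonal.
std::array<uint8_t, k> decode(const std::array<uint8_t, k>& data) {
    std::array<uint8_t, k> inter{};
    for (int i = 0; i < k; ++i) {
        uint8_t acc = data[i];
        for (int j = 0; j < i; ++j)
            if (rows[i][j]) acc ^= inter[j];
        inter[i] = acc; // diagonal entry is 1
    }
    return inter;
}

int main() {
    std::array<uint8_t, k> data = {'a', 'b', 'c'};
    auto inter = decode(data); // transmitter precomputes Decode(A)
    for (int i = 0; i < k; ++i)
        assert(encode_row(i, inter) == data[i]); // first k symbols are the file itself
    uint8_t check = encode_row(3, inter); // later rows give repair symbols
    (void)check;
}
```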
> The only other work that deals with rateless codes that we are aware of is a forthcoming paper [6] by Luby. Since the author declined to provide us with a copy of his paper, we are unable to compare our results until his paper becomes public at FOCS’02. The final version of our paper will include a comparison with Luby’s work.
>How about giving the domain to another entity that is willing to pick up the hosting and/or redirection costs?
GitLab bought Gitorious so they could shut it down, not so they could give it to someone else. That is the purpose of acquisitions within the same field.
The main reason for the acquisition is to give and communicate a clear upgrade path for existing Gitorious users. If someone wants to pay for the hosting costs to keep gitorious.org running longer we'd be happy to do that. Please email me at sytse@gitlab.com or comment here if you're interested.
>The main reason for the acquisition is to give and communicate a clear upgrade path for existing Gitorious users.
That's a pretty disingenuous thing to say when it seems like the only reason an "upgrade path" is needed in the first place is because of the acquisition and shutdown.
Edit: Even with Gitorious being "no longer sustainable" in its current form, there are other methods that could have been used (price adjustments, fundraising, etc) rather than an outright and very short-term shutdown.
Responding after the edit. Price adjustments and fundraising make sense when a project is alive and growing, but Gitorious had been seeing fewer and fewer contributions over the last few years.
My teams and I used Darcs at two startups around 2005-2007, just before and after 2.0.
When it worked, it was pretty good. Darcs' UI was close to ideal. The interactive record prompt, for example (which git eventually got with the "-p" flag on "git add"), was great. The way Darcs was able to infer dependencies between commits automatically and treat history as a "sea of patches" rather than a linear sequence meant that it was very easy to work with branches.
The problem was that as our codebase grew, it increasingly did not work. The infamous exponential conflict problem was just one of several bugs and performance issues that ended up costing us a lot of money in lost productivity. Worse, when something went wrong, you probably couldn't fix it yourself. The only people who actually understood the internal database (not just the files on disk, but how it all fit together, including the "patch theory") were the Darcs developers themselves, and not many people besides David Roundy understood it from top to bottom. The fact that it was written in Haskell (which, at the time, was a lot more obscure than it is today) just made things worse.
After a while we also found that the lack of a linear history had major downsides. Patch dependencies made it harder to cherry-pick; when you wanted to pick just one commit, it was often impossible to understand why Darcs wanted to pick a bunch of unrelated commits along with it, and from there on it got messy (and you increasingly risked bumping into the exponential merge bug). Git's history-rewriting tools cause more conflicts, but are ultimately simpler and easier to understand.
Darcs' tragedy is probably that its performance problems burned so many people that they gave up and left it for Git or Mercurial, never to come back. We might have stuck with it longer, if it hadn't been for the aforementioned issues. On the bright side, even if Darcs hadn't had these issues, we probably would never have had a Darcshub.com, and Git would still have beaten the competition.
Git's user interface is awful until you understand how the organic hairball underlying it is organized, at which point you can mostly "get it." Darcs' hand-holding approach to this is a great example of how to do it; Mercurial's also pretty good (for Hg you need about one tutorial and previous SVN experience, or two tutorials, to "get" it).
What Darcs really lacks is a good "Tortoise" client for seamless Windows integration. It was tried (last release 8 years ago; I feel like I can safely talk in the past tense about it) but the userbase has not been large enough to sustain it.
Dev kit is $200. Chip is listed at Arrow for ~$10, but out of stock.
I wonder if you could do some interesting analog audio stuff, like building low part count multi-speaker crossover networks, etc. Then again, for $10 apiece, it probably still makes way more sense to just put a cortex or something on there and do it all digitally.
You could market it to the analog-or-nothing crowd in the audio community. Decreasing production time of "boutique" analog clone effects pedals could make you some cash.
That would be neat— a reconfigurable effect pedal that you could download and install new patches for.
The _real_ trick would be providing the option to route the signals out of this thing and through vacuum tube stages, so you're not stuck trying to build a decent overdrive or compressor with silicon-only components.
They sell the dev kit for the current AN231E04 part, but at least the Asian distributor does not stock it (back order only), and it costs over $200, though at least it is available in quantity 1. If I just wanted the part itself, it comes in increments of 1000...
I have some more ideas for lab instruments, but under these circumstances the part can't really be used for product development, sadly.
With all the fake electronic parts coming from China, are those companies a trustworthy long-term source for such parts? Not sure.
One common piece of advice is to buy from authorized sources, and looking through Octopart[1], Arrow is the only distributor of Anadigm, with a 1000-unit minimum quantity.