
I've enjoyed using it for free, but I'm not sure it's worth the $10/mo yet. When it works, it's great: a nice-to-have for speeding up development, but it has yet to give me anything I wouldn't be able to just write myself. And when I wish it would give me the answer to something I don't know how to do, it spits out something very wrong.

Also feels kind of icky to train on open source projects and then charge for the output.


> train on open source projects

To be specific, the FAQ states: "It has been trained on natural language text and source code from publicly available sources, including code in public repositories on GitHub."

Some have raised concerns that Copilot violates at least the spirit of many open source licenses, laundering otherwise unusable code by sprinkling magic AI dust... most likely leaving the Copilot user responsible for copyright infringement.


Yep. The only reason it hasn't been utterly dogpiled by lawyers is that far fewer people care about code than other forms of IP. If I made an AI assistant called PhotoStar to help with digital art and it just attached Big Bird's face onto a character in my children's book, I'm going to get sued. "Hey now, I just hit paste, the software pressed copy by itself" is not going to hold up.


Or the fact that you grant GitHub an implicit license as outlined in the ToS.


GitHub has never asked users to represent that they can grant an unlimited-rights license to GitHub themselves for any purpose. Further, the person posting GPLed code to GitHub is not necessarily the sole copyright holder, and GitHub has never represented that there was a problem with this.


GitHub isn't liable. That's been established in court with regard to training AIs. Who is liable is you, who may or may not have the legal right to use the code CoPilot spits out for you.


It seems like this space will open up all sorts of interesting novel legal questions.

It is possible to provide CoPilot with a sequence of inputs that reproduces some of its training input, which was copyrighted. Let's say you want to help people violate copyright, so you as a third party distribute a script that provides that sequence of inputs. Who's violating the copyright there?

Alternatively -- it is apparently legal to produce a clean-room implementation that duplicates a copyrighted implementation. Suppose you were to use a tool like CoPilot, which has just been trained on that copyrighted implementation. Is your room still clean? You might even be able to get it to spit out identical functions!

Or, if you have a ML algorithm which has been trained on leaked closed source code, and it is sufficiently over-fitted as to just provide the source code given the filename or the original binary, who is violating copyright when this tool is used? If it is just the end user, then this seems like a really convenient way to launder leaked closed source code.


I don't think it's as clear cut as you make it out to be. Tortious interference is a common law remedy that might make GitHub/MS liable.

If I induce you to break a contract with someone else they can come after me for damages.

For example in this case, there are developers who have created GPL code. That code was licensed to some other developer. Github then encouraged people to upload git copies of the GPL code onto github where it was put into the model. That model contains the copyrighted materials and isn't coming with the necessary notices. The output of the model can be code that is a direct stand in for the copyrighted work. Thus Github have become a party to breaking the license even though they themselves never agreed to the GPL.

In addition Github are encouraging (They are advertising it and making it available broadly) other developers to copy that code and use it in their project. Again that's encouraging an action that breaks a contract. Github is well aware that this is likely happening and they continue on. Thus they might be liable. You also might be liable.

All of these things can and likely will be argued before courts but it's not at all one sided.

> That's been established in court with regards to training AIs.

What are you basing the certainty of this statement on? The case law I have seen around this is pretty spotty. Cases around training on copyrighted materials have predominantly been about the input, not the output, with the final output usually being controlled by the model owner. For example, Google obtained the books they scanned legally, then used them to produce Google Books' index. There are some major differences.

- The books were purchased, meaning they got a license to use the book. There's for sure code in the model that GitHub does not legally have the right to use, and they are aware of this, making the input more shaky for GitHub.

- GitHub is making a direct profit off of this service. It's a revenue-generating enterprise. That's important since it raises the bar of what they can be expected to do.

Nothing has gone to the Supreme Court yet; it's all per circuit and not settled case law. Also, this gets WAAAAY more complex when we start talking about jurisdictions outside the US, where it isn't decided at all.

These things are complex and likely you need your lawyer to advise you with any real questions.


> The books were purchased, meaning they got a license to use the book.

This may be a bit nit-picky, but I don't think that is correct.

Most books I've seen don't say anything about granting a license so there would be no explicit license that comes with them.

Maybe you could find an implicit license if normal use of a book required a license but it does not. Copyright law allows all the normal uses of a book without requiring permission of the copyright owner. You only need a license when you want to do something that requires permission.


I should have been more explicit; you are completely correct.

I was saying that there's some implied license after first purchase. I believe that was part of the court's decision: paying for a book (or a library paying) gives you implicit fair use rights. GitHub's copies of code were not purchased; they were sometimes given by a third party.

So there's likely some room to argue that fair use rights are different enough between previous cases and github.


This has been explained many times - you can check word for word that the output is original. All it takes is a Bloom filter built over the Copilot training set and an n-gram extractor.
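
Roughly, the check could look like this (a minimal sketch in Python, with an arbitrary n-gram length and a naive whitespace tokenizer; not Copilot's actual filter):

    # Sketch: hash every token n-gram from the training corpus into a Bloom
    # filter, then flag generated code whose n-grams hit the filter.
    import hashlib

    class BloomFilter:
        def __init__(self, size_bits: int = 1 << 24, num_hashes: int = 4):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: str):
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item: str) -> None:
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item: str) -> bool:
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    def ngrams(code: str, n: int = 8):
        # Whitespace tokenization keeps the sketch short; a real check would
        # use a language-aware tokenizer and normalize identifiers.
        tokens = code.split()
        return (" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def looks_memorized(generated: str, training_filter: BloomFilter) -> bool:
        # Bloom filters can false-positive but never false-negative, so a
        # miss proves the n-gram was absent from the training set; a hit
        # only means "probably present" and needs an exact follow-up check.
        return any(g in training_filter for g in ngrams(generated))

Of course, "no 8-token n-gram appears in the training set" is a much weaker property than "not a derivative work", which is where the real disagreement lies.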


Yes, and you'll be fine if you do. The problem is you might not bother.


Alpha-equivalence be damned!



Fortunately, it can also generate high-quality completely novel characters, every bit as lovable and unthreatening as Big Bird:

https://imgur.com/a/ppeclPL


But if you made DALL-E and it just remixes images sourced from a broad scan of the Internet, filtered through several layers of machine learning indirection, you're all good.


Sure, if it's remixed to the point where most people don't go "hey, that's Big Bird!" CoPilot doesn't reach that point, or at least doesn't always, like when it copied Quake's fast inverse square root code verbatim, comments (profanity included) and all. Using CoPilot to create commercial code opens the coder to significant liability if there's enough money at stake.


That piece of code had duplicates in the training set making it prone to memorisation. Almost all generated code is original.


> Almost all generated code is original

Good, you will almost not be liable for infringement.


Let's wait for the first big Codex infringement scandal to erupt and then I will start worrying about it.


Just argue that you subcontracted that code to Microsoft in good faith for $10/month and pass on the lawsuit to them.


I still can't believe they trained it on open source code and didn't have some tag system to a) exclude based on licensing, and b) auto-include the licensing, or at least warn about it before applying code. Especially when many cases were shown of it writing code line by line from the same exact codebase.


Another concern is that nearly every stackoverflow answer or wikipedia article that isn't a trivial algorithm tends to be buggy at its edge conditions. Most of them look like they were submitted by college students and not experts.


Remember when we believed that experts were over because the wisdom of the crowds would reign supreme?

Been a hell of a decade, hasn't it.


The "wisdom of the crowds" doesn't mean what many people think it means.

The wisdom of crowds works best when:

1. participants are independent (otherwise you may get failure modes, such as "groupthink" or "information cascades");

2. participants are informed, but in different ways, with different opinions;

3. there is a clear, accepted aggregation mechanism, where individual errors "cancel out" to some degree.
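
To make point 3 concrete, here's a toy simulation (my own illustration, not from Surowiecki): independent errors average out as the crowd grows, while a shared bias, the groupthink failure mode from point 1, never does.

    # Averaging many independent, unbiased guesses beats most individuals,
    # but shared (correlated) error does not cancel out.
    import random

    TRUE_VALUE = 100.0

    def crowd_estimate(n, shared_bias=0.0):
        # Each guess = truth + shared error + independent noise.
        guesses = [TRUE_VALUE + shared_bias + random.gauss(0, 20)
                   for _ in range(n)]
        return sum(guesses) / n

    random.seed(42)
    for n in (1, 10, 100, 10_000):
        print(f"n={n:>6}:"
              f" independent={crowd_estimate(n):7.2f}"
              f" correlated={crowd_estimate(n, shared_bias=15.0):7.2f}")
    # Independent error shrinks roughly as 1/sqrt(n); the shared bias of 15
    # persists no matter how large the crowd gets.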

I view the topics in James Surowiecki's book (or the Wikipedia summary of it, at least) as required thinking for everyone, preferably synthesized with a study of statistics and political economy.

In particular, the Wikipedia article's section on "Five elements required to form a wise crowd" is a slightly different slicing of the required elements that I offer above.

* If you read that section, trust is listed. I, however, don't see trust as a necessary condition for a "wise crowd". Trust is often useful (or even necessary) when a collective decision is used for governance, decision-making, and policy.


When the wisdom of the crowds is all easily accessible, the hard part becomes curating.


This is legit. While it seems it takes forever to bring this kind of stuff to trial, it will be an interesting case for sure. Especially in the broader more general sense.

AI is just recomposition of existing snippets of code, art, text, music, etc. Does an AI fall under fair use? What happens when an AI produces something too similar to an existing work or trademark? I know the computer won't get sued, the owner/user will. But still, it's a hard problem.

Even if Copilot was initialized with snippets from Open Source Software (exclusively), it doesn't mean that copyright infringement isn't a concern.


> AI is just recomposition of existing snippets of code, art, text, music, etc.

It's not random recomposition, which is worthless. It's useful recomposition, adapted to the request and context. It adds something of its own to the mix.


Not to mention that just because the code is public, doesn't mean you can use it however you want. You can publish code and still retain copyright. Wonder if GitHub looked at the license when they gathered the data for the model.


It seems unfortunately clear that generative ML as typically practiced falls under fair use of even the most restrictive licenses, or lack thereof (e.g. a training set including Disney movies without Disney's permission). Some people say that's great and it's legal, hooray, but I would love it if the law caught up and added requirements to models trained this way. If you benefit from other people's stuff without their permission, then you ought to have to give back in some way.


What is actually crazy is having copyright/patents/whatever apply to mathematical structures and code, and having it be retainable for so long. It's rent on ideas; such a ridiculous concept.


Copyright and patents are very different. I think the general consensus among developers is that software patents are silly, but copyright on source code is very important.


If you can't prove your code was stolen you shouldn't have a claim. And Codex should just skip code that exists in the training set. All that remains is creative code.


Would a cartoon about Mickey Duck and Donald Mouse be infringing?


You can work on the definition of "similar code". It can be a separate model on its own. Use human judgements to learn it.


It’s hardly different from reading those projects yourself and learning from them.


Learning from them would be fine, reproducing them as-is without abiding by the license is not and that's where the difference lies.


Depends on your budget of course, but I don't think it's worth $10/month. I pay just a little bit more than that for an entire IDE. The problem with Copilot is that it's USEFUL for boilerplate code and when you need a lot of copy-paste "coding" (think APIs, controllers, etc... basically shifting data around the place), but any time you need to actually code something with some actual algorithmic logic behind it, it's little more than a distraction, and often a really problematic one, because if you let it, it will happily suggest things that look OK on the surface but are almost always (and I really mean most of the time) wrong, buggy or otherwise incomplete. You can't rely on it. It's like a kid (I wanted to say a "junior programmer", but it's not anywhere near that level) you can offload some chores to, but you always have to check on it and what it actually does. Fine if all you need is to wash the dishes... more than that and you're asking for trouble.

When I'm in the flow, trying to solve some algorithmic problem, I always turn it off because the BS suggestions coming from its little "mind" actually slow me down and mess with my focus. Which all makes sense when you realize what it ultimately is - a philosopher, as opposed to a mathematician.


I very often will let it suggest its thing and then tweak it to work how I want. It's like super auto-complete for me. If I can't remember how a specific pattern goes for some library, I'll let it write it for me, and then double check it to make sure it's doing what I want. That's still faster than me going to check the API and writing it all out by hand.

Most projects are 90% BS glue code and 10% actually interesting code. I don't mind only having help with the 90%.


I used Copilot yesterday because I wanted a random 10-character string and was like, ahh, I don't have the brain power right now to think of this. And remembered I had Copilot. So I enabled it, wrote a comment, and it generated ~10 lines that solved my problem. Tweaked it a little bit and rolled with it.

It helps solve the boring simple shit so I can focus on the interesting bit.
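
For what it's worth, what Copilot emits for a comment like that tends to look something like this (a plausible reconstruction, not the commenter's actual output):

    import random
    import string

    def random_string(length: int = 10) -> str:
        """Return a random alphanumeric string of the given length."""
        alphabet = string.ascii_letters + string.digits
        return "".join(random.choices(alphabet, k=length))

    print(random_string())  # e.g. 'aZ3kQ9xLmP'

One thing worth double-checking in exactly this situation: if the string is an ID or token that matters for security, `random` is the wrong module, and `secrets.choice` is the drop-in fix.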


> Most projects are 90% BS glue code and 10% actually interesting code. I don't mind only having help with the 90%.

Yea, that makes sense, I agree with that. If your use case is skewed more towards "BS glue code" as you say, you'll find more use out of Copilot. Then $10/month can be fair, cheap even.


This seems pretty reasonable to me / resonates w/ how I might use it.


> Also feels kind of icky to train on open source projects and then charge for the output.

Yeah, this feels like the same nonsense that scientific journal publishers pull. If your product only has value because of what we made, it's completely unfair to not pay us for our work and then to turn around and charge us to use the output.


Also its users might be violating the GPL.

https://www.infoworld.com/article/3627319/github-copilot-is-...


How can the user be violating the license, not the distributor? If I give you a binary that gives you a Disney movie, it's not you violating the copyright, it's me. The copilot itself is violating the copyright, not its users.


"Your honor, I had no way of knowing that this mysterious device I purchased that manufactured shrinkwrapped Disney DVDs was violating copyright."

"Intent is not relevant to copyright infringement liability."

"But your honor, I heard on Hacker News that it was."

"I find you guilty."

"But your honor, copyright violation is usually a civil issue, and 'guilty' is a criminal trial concept."

"Well, I also get my legal training from Hacker News."


If you take the Disney movie the binary gives you and then pass it on, you're in violation even if the company distributing the binary is also in violation. You can sue them for damages that result from you being sued but good luck.


If you're making software just for your own use, you're right. But most people who make software do distribute it.


Where I live, copyright literally means the right to copy. Which means that if you use a binary that gives/produces/generates a Disney movie when you do not have rights to that movie, you violate copyright by virtue of copying the IP into your computer's memory and then onto the view buffer of your display. Also, if the binary manages to do that without actually violating copyright itself, it might even be legal. There are other laws that could be used, though; I forget what they got Napster on, but they had something to shut it down, same for torrent sites like Piratebay.


If the copilot users then distribute the source they got from it, they are at that point violating copyright.

E.g., if I take that Disney movie, incorporate it into my own movie, and distribute it, then I'm also violating copyright.


The user of Copilot is a developer - the distributor.

And you might argue that Copilot is also a distributor.


Yes. Even if it may be permitted under some licenses, training models off millions of developers' code and capitalizing on those models goes against the spirit of open source software. I'd expect nothing less from Microsoft.


> has yet to give me anything I wouldn't be able to just write myself

Sure it has: Time.

In terms of economics it's really simple: does Copilot free up more than $10 worth of your time per month? If the product works at all as I understand it (I haven't tried), the answer should be a resounding "yes, and then some" for pretty much any SE, given current market rates. If the answer is no (for example because it produces too many bad suggestions which break your flow), the product simply doesn't work.

There might be other reasons for you not to use it. Ego could be one. Your call.

> Also feels kind of icky to train on open source projects and then charge for the output.

I don't know why it would feel any more icky than making money off of open source in other ways.


It is completely different from using open source programs to make money. Many open source licenses explicitly require any derived work to maintain the copyright notice and a compatible license. If I use GitHub Copilot to create a derived work of something somebody else published on GitHub, I have no idea who wrote the upstream code or what license they made it available under. The defense for this is the claim that GitHub Copilot doesn't create a derived work, since the code it produces is very different from anything upstream (this is claimed in the original paper from OpenAI). However, many people have found examples showing this to be a questionable or wishful-thinking claim.


Lack of training data is obviously not gonna be the linchpin of this project, no matter how reproachfully the HN crowd looks upon Copilot in regards to OSS licensing. Even if we are prepared to dub the Copilot team liars (bold move, good luck in court), there is always gonna be enough code to go around to make this thing happen regardless. Rumor is Microsoft could chip in some.

In addition, the idea of "derived work" in code snippets is, quite frankly, nuts. There are only so many ways to write (let's be generous on the scope of Copilot) 25 lines of code to do a very specific thing in a specific language. If you have 1000000 different coders do the job (which we do), you'll have a significant amount of overlap in the resulting code. Nobody is losing sleep over potential license issues with this. Because that would be insane.

I have noticed that upholding OSS licensing (at least morally) is kind of a table manner on HN. That's fine, but this is some new level of silly.

It's also not gonna persist, because no matter how much we love our OSS white-knightedness, we love having well-paying jobs more.


Having used it quite a lot, I'm not sure it does save me $10 of time per month. At least as often as it generates usefully correct code, it generates correct-appearing but actually totally wrong code that I have to carefully check and/or debug.

It's quite nice not to have to type generic boilerplate sometimes, I guess, but it's very frustrating when it generates junk.


Same experience for me. Checking the code it generated, and the subtle bugs it created which I missed until tests failed, made it at best a net-zero for me. I disabled it after trying for 2 months.


You lasted longer than I did! Disabled after a few days.

I think it really depends on what languages you use, though. If you use something like Kotlin, where there's almost no boilerplate and the type system is usefully strong, the symbolic logic autocompletion is just far more reliable and helpful. If you're stuck in a language where there are no types and there's lots of boilerplate to write, then I can see it may be more helpful.


I turned it off a week ago because I found it was wasting time when everything it generated required going back to fix issues.


> I don't know why it would feel any more icky than making money off of open source in other ways.

For me, this entirely comes down to the philosophy of how a deep learning model should be described. On the one hand, the training and usage could be thought of as separate steps. Copyrighted material goes into training the model, and when used it creates text from a prompt. This is akin to a human hearing many examples of jazz, then composing their own song, where the new composition is independent of the previous works. On the other hand, the training and usage could be thought of as a single step that happens to have caching for performance. Copyrighted material and a prompt both exist as inputs, and the output derives from both. This is akin to a photocopier, with some distortion applied.

The key question is whether the outputs of Copilot are derivative works of the training data, which as far as I know is entirely up in the air and has no court precedent in either direction. I'd lean toward them being derivative works, because the model can output verbatim copies of the training data (e.g. outputting Quake's inverse sqrt function with identical comments, prior to that particular output being patched out).

Getting back to the use of open source, if the output of Copilot derives from its training data in a legal sense, then any use of Copilot to produce non-open-source code is a violation of every open-source licensed work in its training data.


I want to love GitHub Copilot, but it's just not there yet. For trivial stuff it's great, but for anything non-trivial it's always wrong. Always.

And my problem is: time.

Cycling through false positives and trying to figure out if it's right costs me way more than $10 a month in productivity.

I can't wait for better versions to come out, but right now, no.


But I don't get paid on a piece rate; the amount of time I spend working is constant. Anything that increases my productivity just means I get more work done. (Others may differ, but I know from experience that I like to keep to a fixed schedule.) And that's mostly benefitting my employer, not me, so it seems like something my employer should pay for, if they believe in it.


Yeah it's just practicality for me. There is software I pay a lot more for that I use a lot less.

$100/year is a steal for the amount of tedious code copilot helps me with on a daily basis.


I could also make a mistake due to Copilot which takes me time to fix, and then I end up spending more time checking code wherever I used it. It has similar pros/cons to copy/pasting.


Given the cost of the infrastructure needed to run those large language models, it's very likely that Microsoft is still operating Copilot at a loss. I don't see an issue with it being a paid service, as it is a costly service to provide.

What's a pity, however, is that there's no free tier for hobbyists, as paying a $10 monthly subscription won't make sense when you only code occasionally. For professionals using it every day, $10/month is inconsequential.

I don't think it would have cost them much more to offer a free allowance covering, say, an average of 8 hours of coding per month.


GitHub Pro is $4/mo and includes 3,000 minutes of CI compute per month (private repos), among all the other features; at that per-minute rate, $10 buys 7,500 minutes, and you're not going to use 7,500 minutes' worth of compute a month with Copilot. I'll certainly pay up, though.


CI runs on CPUs, Copilot runs on GPUs. Waaaay different. Especially in this age of cryptocurrencies and chip shortages.


It’d be nice if they made it free if the upstream repo is published publicly under an open source license. They have all that info already.


It’s free for open source maintainers.


Open source maintainer here. No, it's not.

100% of what I do is open source. It's used by millions.

It's free for maintainers of "major" open source projects. I'm not sure what a "major" open source project is, but it's clearly not what I do. The only way to know if your open source project qualifies is to try to sign up. If it does, you're given a free option.


What repo do you maintain that is used by millions?


I don't connect my real-life identity to my online identity.

I am the primary author (but not current maintainer) of an open-source project which is reported to be used by over 100 million people, according to (flaky) statistics kept by the current maintainers. That's around 1% of the people in the world.

I don't trust the current maintainers to be honest with numbers (there are lots of ways to estimate numbers of users), but it's definitely in the millions, and it's a project you (and most random people you'll meet in tech, and many outside of tech) will have heard of.

I am currently working on earlier-stage projects, which have smaller communities, but 100% of them are open-source.


Agreed. At the very least, I was hoping they'd bundle it with the GitHub Pro subscription for individuals rather than as a separate product.


Totally agree. I was expecting to get this feature as part of my Pro subscription.


I was expecting the same.


> Also feels kind of icky to train on open source projects and then charge for the output.

"open source is great, except when it's used in a way I don't like"


I don't see the use itself as a problem, but rather that the result is not treated as a derivative work of the input. If I train it on GPL code, the result should be GPL, too.


This is kind of like saying that any programmer who has ever learned something from reading GPL code can only use that knowledge when writing GPL code. It's not literally copying the code. The training set isn't stored on disk and regurgitated.

Also, there is logic in Copilot that checks to make sure it is not suggesting exact duplicates of code from its training set; if it generates one, it never sends it to the user.


But Copilot is not a programmer; Copilot is a program. Slapping the "ML" label on a program doesn't magically absolve its programmers of all responsibility, as much as tech companies over the past decade have tried to convince people otherwise.


I really dislike this false equivalence between human learning and machine learning. The two are significantly distinct in almost every way, both in their process and in their output. The scale is also vastly different. No human could possibly ingest all of the open source code on GitHub, much less regurgitate millions of snippets from what they “studied.”


> This is kind of like saying that any programmer who has ever learned something from reading GPL code can only use that knowledge when writing GPL code. It's not literally copying the code. The training set isn't stored on disk and regurgitated.

I wouldn't put any hard rules on it, but it does seem very fair for programmers who have learned a lot from GPL code to contribute back to GPL projects. I have learned from and used a lot of open source software so whenever possible I try to make projects available to learn from or use.


Read up on clean-room design and the IBM BIOS lawsuits from the '80s and '90s; just seeing proprietary code can be a violation.

Why is it different if we slap an "ML" label on it?


I guess if you trained on GPL code that should be true for your code as well.


It would be great if that were the case, but unfortunately it isn’t. We’ll need new laws for that.


Yes. It is completely valid, understandable, and reasonable to have a variety of different feelings and views about how specific code and specific licenses are used.

This is particularly the case when we see the emergence of new technologies that use it in different ways. Different people may have a wide variety of equally valid views about how it is incorporated into that system.

There's nothing inconsistent, confusing, or complex about those views.


I think the issue is not that it’s trained on open source code but that it’s trained on code whose licenses may not permit it. If you license your project in a permissive way then I don’t see a problem.


Most "permissive" licenses still require attribution.


Are there actually any licenses which do not permit training an AI model on the code?


(IANAL) It's a tool, transforming source code. The result thus seems like a derivative work; whether you are or are not allowed to use that in your work depends on the originating license. (And perhaps, your license. E.g., you can't derive from a GPL project and license it as MIT, as the GPL doesn't permit that. But to license as GPL would be fine. But this minimal example assumes all the input to Copilot was GPL, which I rather doubt is true, and I don't think we even know what the input was.)

I think there might be some in this thread who don't consider these derivatives, for whatever reason, but it seems to me that if rangeCheck() passes de minimis, then the output from Copilot almost certainly does, too. That a tool is doing the copying and mutating, as opposed to a human, seems immaterial to it all. (Now, I don't know that I agree with rangeCheck() not being de minimis … and yet.) Or they think that Copilot is "thinking", which, ha, no.


Open source licenses aren't a free-for-all. Many have terms like GPL's copyleft/share-alike or the attribution requirements of many other licenses. If copilot was trained on such code, then it seems that it, and/or the code it generates, violates those licenses.


How can it help you speed up development but not be worth $10/month? Your hourly rate can't be that low.


It's great when it works, and can also be costly when it doesn't or when you blindly trust it.


Which is just another way of saying that it doesn’t really work, except perhaps for party tricks.


For me it works wonderfully, when you choose to use it. If you just blindly accept every suggestion, you're going to have a bad time.

You also have to (slightly) change your flow to get the most out of it, which I know is a deal breaker for many.

I absolutely love it. It's not going to write good code for you, but for an autocompleter it is amazing.


The fact that GitHub charges only $10/month suggests that they themselves don't believe in their product. Because if it actually worked, i.e. sped up software development by, say, >10%, developers should be happy to pay 10 times as much or more.


This is a rather silly argument... by that logic, since using the Adobe suite saves me at least a dozen hours every month, I would be happy paying $500 a month for it.

There's a limit to what individuals are willing to pay for a subscription service irrespective of how many hours it saves you. Now if we're talking enterprise and bulk licensing then that's a separate issue.


This is a rather rude response… Your comparison with the Adobe suite has a flaw, but I have no interest in exchanging ideas in this tone.


Matches my experience. I legitimately like it for quick boilerplate; it's like a better snippet engine. But paying for it...


It's worth it if it saves you a few minutes every month.


Only if it saves you a few minutes every month in a "net" sense. If it saves you dozens of minutes every month and then also costs you dozens of minutes every month in hard-to-predict ways, it's hard to judge either way on it.


They still have to pay for servers and maintain the model itself. A neural network isn't just the data -- training and commercializing it (testing, QA, etc) is a lot of work.

You wouldn't have an issue with someone making money by using open source software (like a website that is hosted on a server running linux).


I went to the payment URL and it said I was eligible to get it for free. Not sure if that works for some people who contribute to other OSS repos, but I was about to give up on it when I saw I didn't have to pay, so it might be worth checking.


Also, $10/mo is not so bad but I am not in the place right now for more subscriptions. I am in the process of stopping several at the moment.


Same here. With prices rising everywhere and a salary of ~40k euros before taxes (which is normal in IT in many EU countries if you don't work for big tech) I hardly have room for another subscription. People here are too quick to say "what is $10 on a $80/hour salary?"


Yea...does this mean it will stop working until I pay?

It's been really nice for autofilling console logs and boilerplate code...but $10? It's a novelty that is nice when it works, but that's a steep price point for what it is, and I don't see that changing any time soon.


People in the technical preview get a 60 day free trial, but yes, after that, you'll have to pay.


> Also feels kind of icky to train on open source projects and then charge for the output.

The business model for most of the Internet is to bait people into using things for free and then monetize them without compensation in some roundabout way.


> Also feels kind of icky to train on open source projects and then charge for the output.

How would you feel if they just provided the software without the model, assuming you could train it yourself on open-source code in an instant?


I don't know enough about how GPT-3 and ML work to really answer this, but I think I'd be fine with what you're saying, if I understand the question. If they provided (and charged for) the infrastructure, but the model was FOSS and community-driven, it would be less icky, I think.

I just don't like the idea of taking people's work (without asking or checking licenses) and then selling it back to them. It'd be like if Stack Overflow decided to start charging to see answers and not asking or giving a split to the person who gave the answer. I realize they aren't just copy/pasting so not a perfect parallel, but still.


Technically anyone could use those same open source projects and provide an open source solution, or a paid solution as well. I do feel how you feel, though; it's a little off-putting.


The machine learning models are not open source themselves, so you can't just do this yourself with existing open source projects.


+1 on ickiness


I already have it in my visual studio code. I do like it. Will it stop working for me now?


dreaming of the day blender has more support for 2d animation. software like toon boom harmony is way too expensive for hobbyist work and there aren't many alternatives for that paper-cutout type of animation.


i hope they eventually add more support for 2D in the style of toonboom or after effects. being able to draw with vectors, create puppets, and then animate them. 2d animation is sorely lacking when it comes to open source software.


i got decisor, which i think is a word?

https://www.thisworddoesnotexist.com/w/decisor/eyJ3IjogImRlY...

wiktionary says it's obsolete but i think that still counts as existing? lol https://en.wiktionary.org/wiki/decisor


Isn't that what you call an incisor that falls out?


you can make a creeper farm to automate gunpowder, that part is easy. the hard part is that sand isn't renewable. a lot of tech servers (scicraft being the biggest) just use tnt duping to do it until mojang makes sand renewable. i think that's the only "cheating" they do and it's kinda understandable cause otherwise it's impossible to get a lot of tnt.


> the hard part is that sand isn't renewable

If you've opened The End, there's a way to use the End Portal to duplicate sand. In one of my worlds, I have that feeding into a 32 furnace super smelter with a tree farm nearby for fuel (although I could build a carpet duper for that, actually.)

https://www.youtube.com/watch?v=wfeGyXJOCBw


If you dupe carpet you can use it as fuel, though it's quite inefficient (about 3 carpets per smelted item). Still, as an unlimited item... as long as you can supply the super smelter with fuel quickly enough, efficiency isn't that important.


good money for a hobby tho if you can luck into a playlist


whatever the answer, it has 0 to do with the topic and is just the parent comment showing their bias for no real reason.


i miss that site, hella nostalgia just hearing the name.


that doesn't mention the `without noticing` part, which is the creepy, not-good part. sounds like it's just saying the administrators get a list of currently running meetings and can join them if they want without being invited. doesn't say they're hidden or anything.


well, the article says admins can join "without warning" (i.e., without notice) and without consent. The article does not say whether the admin appears on the user list or not, but let's assume the admin does show on the user list. Even then, it seems entirely possible in practice to join and attend the meeting without anyone noticing. I am certainly not constantly monitoring the user list in my meetings, especially not in larger ones.


You are clutching at straws here. Zoom fires an alert when somebody joins a meeting. If you're not paying attention, that's not Zoom's problem. If the alert isn't surfaced very well, that's a UX issue, but not a creepy privacy invasion. And if you expect to be able to use a platform which your employer has provided without any oversight whatsoever, I don't even know what sort of problem that is. Do you also expect to be able to lock the door to a physical meeting room, on your employer's property, and use it for your own private purposes unchallenged?


So, what better UX do you propose? Playing a sound of a squeaky door?

On the other hand, I live in a country where it is considered polite to knock on doors and wait until the door is opened from the inside. And no one is outright accused of idling behind closed doors.


I don't think it's really in competition with Bootstrap. Bootstrap is a set of common components and a grid system; Tailwind is a set of utility classes for building your own components.

