
> I don’t believe that natural language is an adequate medium for conveying instructions with the precision required for many applications.

Not clear to me if the author actually uses LLMs to do meaningful work, or is speculating about how they might be used.

I've written about 2500 lines of F# for the first time in the past 1.5 weeks, using ChatGPT-4 to guide me. It has been a constant back-and-forth, iterative process. My decades of development experience factored in heavily to guide the process. I would have been at maybe a quarter of the progress without ChatGPT, or given up entirely on F# as my language.

I don't think that iterative aspect will be eliminated any time soon for AI-supported complex, creative processes. It's no different from tweaking inputs to a Photoshop filter until your experienced brain decides things look right.

To that end you need to know roughly what looks "right" before you use an LLM. This will all become second nature to the average developer in the next 5-10 years.



Bit of an aside, but I wonder if the rise of LLMs will lead to new programming languages being much slower to be adopted.

Like you said, you might have given up on F# without ChatGPT assistance, and the main way ChatGPT is able to help with F# is because of all of the example code it's been trained on. If developers rely more and more on LLM aid, then a new language without strong LLM support might be a dealbreaker to widespread adoption. They'll only have enough data once enough hobbyists have published a lot of open-source code using the language.

On the other hand, this could also lead to slower adoption of new frontend frameworks, which could be a plus, since a lot of people don't like how fast-moving that field can be.


I heard somewhere that ChatGPT is surprisingly good at human language translation even though it's not specifically trained for it. There seem to be just enough examples of e.g. Japanese that Japanese researchers use it to translate papers. I suspect that's largely true for programming languages too. I've had great success working with it in Clojure, even though there's relatively little published code compared to more popular languages.


ChatGPT is pretty good at translating Japanese into English. Its English-to-Japanese translations tend to sound somewhat stiff/formal/machine-generated, although it's less prone to hallucinations than DeepL for larger texts. I expect this is because it was trained on a much larger corpus of English-language texts than Japanese ones, which means the problem is not intractable.


Wouldn't you just need to publish a Rosetta Stone-type translation for it to be able to digest the new language fully? E.g. here is how you do this in Python, and here is how you do it in the new language.


The crazy thing that a lot of people don't realize is that all of that data generalizes to anything new you can throw at it. As long as there's enough space in the prompt to provide documentation, it can do it on the fly, but you could also fine-tune the model on the new info.


Which is what Phind tries to do (and mostly succeeds at). LLM + search engine is way smarter than just an LLM.


It could go the other way. LLMs might make porting code from one language to another easier which would speed the adoption of newer and more niche languages. And the future of documentation and tutorials might be fine-tuning an LLM.


I've also wondered about this - including whether we might see a breed of 'higher level' languages (i.e. much higher level than Python) which can then be 'AI compiled' into highly efficient low-level code.

i.e. the advantages of an even-higher-level python that's almost like pseudo-code with assembly-level speed and rust-level safety, where some complexity can be abstracted out to the LLM.


I disagree. ChatGPT is helpful here because F# is a paradigm shift for this otherwise experienced programmer. The programmer probably knows juuussstt enough F# to guide the LLM.


I mean, why is F# the goal, and could we write a better F# with the use of AI?

As an example, why not write in F# and let an 'AI-compiler' optimise the code...

The AI-compiler could then make sure all the code is type safe, add in manual memory management to avoid the pitfalls of garbage collection, add memory safety etc - all the hard bits.

And then if we gave the AI-compiler those sort of responsibilities, then we can think about how this would impact language design in the longer term.

None of this is possible with current-generation LLMs, but it might be where we end up.


This doesn’t require AI, it requires higher-level languages than we have that can express intent more directly.


In current-generation languages there is definitely a trade-off between language productivity (i.e. speed of writing) and features such as runtime speed and memory safety.

So far we haven't been able to close this gap fully with current compilers and interpreters (e.g. Python still runs slower than C).

It seems like that gap could be closed through, for example, automated refactoring into a rust-like language during compilation, or directly into more-efficient byte-code/ASM that behaves identically.

And surely if that is a possibility, that would affect language design (e.g. if you can abstract away some complexity around things like memory management).


To this day I am still wondering what kind of code people write that ChatGPT can possibly help with. All my attempts lead to garbage, and I would spend more time fixing the output of the chatbot than writing the actual code. It does help with some documentation. But even that has glitches.


No one uses it to generate code. Really. Talk to people who actually use it and listen to what they say… they use it to help them write code.

If you try to generate code, you’ll find it underwhelming, and frankly, quite rubbish.

However, if you want an example of what I’ve seen multiple people do:

1) open your code in window a

2) open ChatGPT in window b (side by side)

3) you write code.

4) when you get stuck, have a question, need advice, or need to resolve an error, ask ChatGPT instead of searching and finding a Stack Overflow answer (or whatever).

You’ll find that it’s better at answering easy questions, translating from x to y, giving high level advice (eg. Code structure, high level steps) and suggesting solutions to errors. It can generally make trivial code snippets like “how do I map x to y” or “how do I find this as a regex in xxx”.

If this looks a lot like the sort of question someone learning a new language might ask, you’d be right. That’s where a lot of people are finding a lot of value in it.

I used this approach to learn Kotlin and write an IntelliJ plugin.

…but, until there’s another breakthrough (e.g. latent diffusion for text models?) you’re probably going to get limited value from ChatGPT unless you’re asking easy questions or working in a higher-level framework. Copy-pasting into the text box will give you results that are exactly as you’ve experienced.

(High-level framework, for example: chain of thought, code validation, n-shot code generation, and tests / metrics to pick the best generated code. It’s not that you can’t generate complex code, but naively pasting into chat.openai.com will not, ever, do it)


That matches my experience. It's a sort of shortcut to the old process of googling for examples and sifting through the results. I didn't typically cut and paste from those results, or if I did, it was mostly as a sort of scaffold to build from, including deleting a fair amount of what was there.

Many times it works really well, and it surfaces the kind of example I need. Sometimes it works badly. Usually when it's bad, going to the google/sift method has similar results. Which I guess makes sense: it couldn't find much to train on, so that's why its answer wasn't great.

One area where it works really well for me is third-party APIs whose documentation is mostly just class/function/etc. ChatGPT generally does a good job of producing an orchestrated example with relevant comments that helps me see the bigger picture.


Me too. As someone who used to be a dev but hasn't written code professionally in twelve years or so, it was such an amazing accelerant. My iteration loop was to contextualize it (in English and in code), ask how to do a thing, look at its response, tweak it, execute, see what happened, alter it some more.

The fact that it usually had errors didn't bother me at all -- it got much of the way there, and it did so by doing the stuff that is slowest and most boring for me: finding the right libraries / functions / API set up, structuring the code within the broader sweep.

Interesting side note: unpopular languages, but ones that have been around for a long time and have a lot of high-quality and well-documented code / discussion / projects around, are surprisingly fecund. Like, it was surprisingly good at elisp, given how fringe that is.


With GPT-4, you can often just paste the error message in without any further commentary, and it will reply with a modified version of the code that it thinks will fix the error.


And then you waste time fixing the error the "fix" gpt introduced. Clever.


I've used it on this side project for:

- A rough crash course in F#. I'll say "what's the equivalent in F# of this C# concept?". It will often explain that there is no direct concept, and give me a number of alternative approaches to use. I'll explain why I'm asking, and it'll walk through the pros/cons of each option.

- Translating about 800 lines of TypeScript JSON schema structures to F#. A 1:1 translation is not possible since TypeScript has some features F# doesn't, so ChatGPT also helped me understand the different options available to me for handling that (see the sketch after this list).

- Translating pseudo-code/algorithms into idiomatic F# as a complete F# beginner. The algorithms involve regex + AST-based code analysis and pattern matching. This is a very iterative process, and usually I ask for one step at a time and make sure that step works before I move onto the next.

- Planning design at a high-level and confirming whether I've thought through all the options carefully enough.

- Adding small features or modifications to working code: I present part of the function plus relevant type definitions, and ask it for a particular change. This is especially useful when I'm tired - even though I could probably figure it out myself, it's easier to ask the bot.

- Understanding F# compiler errors, which are particularly verbose and confusing when you're new to the language. I present the relevant section of code and the compiler error, and 90% of the time it tells me exactly what the problem and solution is; 5% of the time we figure it out iteratively. The last 5% I have to stumble through myself.

- Confirming whether my F# code is idiomatic and conforming to F# style.

- Yes it makes mistakes. Just like humans. You need to go back and forth a bit. You need to know what you're doing and what you want to achieve; it's a tool, not magic.
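
As an illustration of the TypeScript-to-F# point above (a hand-written sketch with hypothetical type names, not ChatGPT output): a tagged union in a TypeScript JSON schema has no 1:1 equivalent, but maps naturally onto an F# discriminated union.

    // TypeScript:
    //   type Schema =
    //     | { kind: "string"; maxLength?: number }
    //     | { kind: "array"; items: Schema }

    // One possible F# rendering, using a DU and an option for the optional field:
    type Schema =
        | StringSchema of maxLength: int option
        | ArraySchema of items: Schema

Choosing between representations like this (option fields vs. separate records vs. looser encodings) is exactly the kind of trade-off discussion mentioned above.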

Note: this is the commercial product, ChatGPT-4. If you're using the free ChatGPT 3.5, you will not be anywhere near as productive.


I used ChatGPT-4 to generate Python code which generates C++ code for a hobby project of mine, using a library I've never used before. The iteration speed is ridiculously good, and no one in any of the IRC or Discord channels I visited pointed me even in the general direction of such a simple solution.

https://chat.openai.com/share/d041af60-b980-4972-ba62-3d41e0... https://github.com/Mk-Chan/gw2combat/blob/master/generate_co...


Programmers[1] have a complexity bias that interferes with the idea that LLMs can write useful code.

I had a problem last week where I wanted to extract sheet names and selection ranges from the Numbers app for a few dozen spreadsheets. ChatGPT came up with the idea of using AppleScript and, with a bit of coaxing, wrote a script to do it. I don't know AppleScript and I really don't want to learn it. I want to solve my problem, and its 10 lines of AppleScript did just that.

We're nowhere near LLMs being capable of writing codebases, but we are here for LLMs being able to write valuable code, because those concepts are orthogonal.

1. some, most


I am confused - the "code" you described is likely a Google search away. Well, I mean, Google has become useless, but when it worked it was able to find such stuff in one search. So really all I am getting is that GPT is a better Google.


I'm not quite understanding. It sounds like I was supposed to use google search before it stopped working?

A great counterexample would be a google query which includes the answer in the results.


It’s good at a beginner, early intermediate level when you need help with syntax and structuring basic things. It’s an excellent helper tool at that stage.

But it’s obvious that outside of junior dev work and hobby projects, there’s no way it could possibly grasp enough context to be useful.


Stage? A lot of developers don't realize that they're all the same personality type which is good at particular things. LLMs give everyone else this advantage. You just don't realize it yet because you were never aware of the advantage in the first place.


> they're all the same personality

Yeah usually it was the shittiest managers I ever met that shared this belief. It sounds like they all repeat the same thing - gpt is a better google.


Are you calling me a shitty manager? Would you say this to my face? What's wrong with you?


I am not, but there's a tendency among those types to categorise people into narrow sets that they can understand. Also, your statement doesn't make much sense. Being a good developer means you understand a wide range of issues, not just spelling in a language and adding if statements. The combinations of personalities vary wildly. To be fair, LLMs, if anything, will help developers become better managers, simply because developers understand what needs to be done. Instead of deciphering what someone meant by requesting a vague feature, you can ask a statistical system - an AI, as some call it - what the average Joe wants. And then get it done.


> I am not, but there's a tendency among those types that categorise people into narrow sets that they can understand

Those types huh


I'm not really following what you're trying to say. What is that personality type, and what are the things it's good at? What is the LLM advantage?

Not saying your post is devoid of any substance, just trying to educate myself on my blind spots.


There are three types of intelligence: intuitive, cognitive and narrative.

Tech has, for years, seen people with cognitive/narrative intelligence as the people who are actually smart.

LLMs help the intuitive people reach the level of the cognitive/narrative people. Cognitive/narrative people can't really understand this in the same way the intuitive people are bad at syntax or database structure. The steamrolling will be slow and merciless.


Could you give a concrete example of where ChatGPT is likely to provide a competitive advantage to intuitive people with weak cognitive/narrative capabilities?


Regex.


Did some ETL with Python. ChatGPT got it right 99% of the time. And, remarkably, it understood a public API that I was feeding from, which used 2-letter abbreviations.


If you are trying to do something with an API that you have no experience with, it will get you up and running quickly. E.g., how do I get all Kubernetes ConfigMaps in a given namespace older than n days, in Go? It gives you the bones of how you create and configure a client and query Kubernetes to get the information that you are looking for. It's much quicker than googling and parsing a tutorial.


I’ve used ChatGPT for

- De-obfuscate obfuscated JS code

- Unminify JS code, asking it to guess function names based on the functionality

- Work with it like a rubber duck to plan out the possible solutions to a code problem

- To suggest function names based on the functionality

- To name repos

- Modify a piece of Go code to add specific functionality to it. I don’t know how to write Go; I can read it and grok the high-level functionality


It's what Clippy always wanted to be.


Treat it like your rubber duck.


Are you using 3.5 or 4?


I'm the author, and I don't disagree with this at all - I do use LLMs pretty meaningfully in my day-to-day engineering work, and I definitely agree that they hold a ton of promise in situations like the one you mentioned. To be clear, I'm very much bullish on the applications of LLMs to e.g. coding! The point I was trying to make there was just that for _certain_ tasks, the process of chatting with an LLM is, by nature, less precise and more arduous than a purpose-built UX. By analogy, we might have "describe-your-change" functionality in Photoshop, but it's no replacement for all the other pixel-perfect editing functionality that it provides, and I'd struggle to imagine a world where Photoshop is ever _replaced entirely_ by a chat UX.


Ah haha, I am going through the exact same experience, but I picked up F# about 4 months ago and ChatGPT has been an absolute godsend. There is a whole category of blockers in learning that have been eliminated.


I feel like such a jerk for not wanting to use ChatGPT for all that. It's not just mild paranoia about feeding it input I'd prefer be kept private; I just don't enjoy having someone else do half the thinking for me, I guess.

I mean, I don't like IntelliSense either (but simple autocomplete is fine). Perhaps it is because I only code to help with my job; I don't get paid to produce a lot of good-quality code.


I think I would have felt this way when I was younger. But after writing code for over twenty years, I'm very happy to let computers do the boring half of the thinking, so that I can think about more interesting things.

It is not intellectually stimulating for me to think about what the syntax for a dict comprehension is, or what exactly I'm supposed to do to map over the values of an array in javascript without screwing it up, or any of a million other kinds of minutia. Computers know the answers to these uninteresting questions.


That's what's been so striking for me -- the stuff that is fun and playful for me I get to do; and a bunch of stuff that I hated I can now offload, with the net result that I get to work at the level of abstraction that is most interesting and most human, vs acting like a human robot trying to unearth documentation, API examples, etc.

There's no question about this being the right way to use it for me, but I wonder if this could introduce something bad for someone just starting out, who hadn't first got all the reps with the drudgery over decades? Still mulling on that.


Yeah, I've said a bunch that I'm worried about people starting out from scratch. So much of what I do after I get responses from LLMs is best described as "taste", but how would I have developed that taste without the years of slogging through?

But I also think this is probably just a classic geezer's concern about kids being on my lawn, and it will probably work out just fine for these youths :)


Eh. I think the peak of computer ability was GenX. I’m not in that group but made up for it with sheer intensity. If you grow up on a phone you’re not getting any of the mental modeling to get close to what the machine is doing and you’re only going to be able to work with the limited tools of that level until you deliberately dive deeper.


I think both you and the GP are onto something important. I am one of the vaunted Gen-Xers (this is the first time someone has said anything nice about us, so thanks) and there is something to the idea that my understanding of systems goes down to assembly language (fully) and even CPU architecture (very sketchily) all the way up to, like, web frameworks. So for the stuff I'm usually doing, I understand several layers up and down, and that often seems valuable.

This strikes me as the essence of expertise, the difference between talking to someone who's read a book or a review paper, vs someone who is a real expert at that thing. You can be quite competent and insightful from having done a modest amount of reading and thinking, but if you're seeing the Matrix, and know how all these forces are interacting with each other, that's just a whole other level. It makes an impression when you brush up against it.

However: it always weighs on me how this is all a matter of framing. Like, I don't know the electronics. There's a level of metal that I don't get. Some of the really high-level stuff is now beyond me, too. I catch myself saying "I should really take x {weeks, months, years} and really grok that." And yet my actual experience suggests this is a mirage.

More briefly: there are always more layers for a fuller understanding. It's hard to see how many of them are really useful. Maybe the kid who is 10x better than me at LLM collaboration will benefit more than from having a deeper stack. It's interesting to ponder how these different consequences will play out.


FWIW I'm a (geriatric) millennial, and we were totally taught all that same stuff. And yeah, it totally feels important to me to have this mental model up and down the layers. But I also do wonder whether we're just wrong about that, merely falling prey to nostalgia.

My messy answer is that for many projects it's either neutral or an actual hindrance - I have to remind myself to pragmatically not care about stuff that won't matter for years at the project's expected growth rate - but for other projects it's very useful. I think the way to go is to seek out those kinds of projects, in order to avoid frustration all around.


Always important to be on guard for things amenable to self-congratulation. At least, I sure need to be on guard against it :)


I think a lot of the drudgery is lack of language quality. But this is still a better argument than I’ve seen before for using it.


I dunno, I've used all kinds of languages, from the ones people often consider lower quality to the ones people often consider higher quality, and they all have something like this.

And maybe there's some as yet unexplored design space out there for which this isn't true, but my prior is that we're actually just circling around some essential complexity at the core of the problem, which will never be eliminated.


It takes programming back to the pseudocode experience, but now the computer can fill in all the code bits really well about 75% of the time


Yeah, 75% seems a bit high, but it's a high enough percentage to be extremely useful to me.


Don't beat yourself up. As a mathematically-oriented software engineer I also have little to no interest in using LLMs to help with coding. It feels like the people that do are the builders, i.e. interested in quickly knocking up something that works most of the time. Which is great from a business perspective. I can't help but see my programming output as an artistic expression of my thought, and therefore find it difficult to give up the hands-on construction to others (be it LLMs or juniors).


I think the division is along a different axis than this one (or probably it's along multiple axes).

I've always been more of a fastidious crafter than a "just get it built" person, but I also struggle with a blank page. I thrive on editing more than writing. Since forever, I like to get out something that works or mostly works, and then start carving on it until I like it.

LLMs have been helping me get some ink on the page, but very little of what they suggest ends up in the final product.


I’ve observed for a long time that there’s a lot of value in making a seed, a starting point, for the thing a group needs to create. And that’s exactly the reason - new text is harder than criticism of the old.


Yep! But some people do seem to thrive more with a blank page.

Supposedly Tom Robbins writes books entirely by putting one word after another starting with the first one and finishing with the last one. I don't know if that's apocryphal, but I do think that's closer to the process for some people.

But if I were a writer, I'd be squarely in the "get out a first draft, it will be like pulling teeth, but just get something down; then you can do the fun part of revising and polishing".


Mathematically oriented software engineers have the most to benefit from software assistance, due to algebraic designs.

As one myself, I find LLMs don't feel like giving up control; they feel like collaboration. You probably don't like collaboration.


Claiming that someone doesn’t like collaboration because they don’t want to use ChatGPT is a very unreasonable and biased take.


Not especially more biased than GGP's claim that people who use LLMs as coding assistants are "builders, i.e. interested in quickly knocking up something that works most of the time".


Could you expand on the comment about "algebraic designs"?

When I interact with an LLM it feels like I'm speaking to someone who lacks inherent conceptual understanding, but is doing their utmost best to bullshit me. This is problematic when it comes to algorithmic thinking.

I sometimes need to solve problems that involve tweaking existing algorithms in somewhat novel ways. For example "k-medoid clustering but with various capacity constraints on clusters", or "contraction hierarchies but accounting for varying vehicle dimension restrictions".

I love collaborating with smart colleagues to come up with ideas to solve these problems. Seeing ChatGPT fail to solve very simple traveling salesman type problems gives me little confidence that it can do novel conceptual thinking.

EDIT

To test my own claims, I just tried asking ChatGPT to make this modification to k-means clustering. It produced something that will give a result but will likely not be very optimal. When prompted to refine the solution, it failed to understand the issue and instead proposed the exact same solution, just with a bit more implementation detail.

User

Suggest a modification to the k-means algorithm where the number of elements assigned to each cluster is limited to a pre-defined maximum value.

ChatGPT

To modify the traditional k-means algorithm to limit the number of elements assigned to each cluster to a pre-defined maximum value, you can create a variation of k-means called "k-means with cluster size constraints." This modification ensures that no cluster exceeds the specified maximum size while still optimizing cluster centroids based on the sum of squared distances. Here's a high-level overview of the modified algorithm:

1 Initialize centroids: Start by randomly selecting K initial centroids, where K is the desired number of clusters.

2 Assign data points to clusters: Assign each data point to the nearest centroid based on Euclidean distance, just like in the traditional k-means algorithm.

3 Update centroids: Recalculate the centroids of each cluster by taking the mean of all the data points assigned to that cluster.

4 Check cluster sizes: After updating the centroids, check the size of each cluster. If any cluster exceeds the pre-defined maximum size, proceed to reassign some data points from that cluster to other clusters. Here's how you can do this:

    Identify the cluster(s) that exceed the maximum size.

    For each over-sized cluster: Sort the data points in that cluster by their distance to the cluster centroid in ascending order. Start reassigning the farthest data points to other clusters (the ones with the smallest distances) until the cluster size constraint is satisfied.

    Update the centroids of the affected clusters accordingly.

5 Repeat steps 2 and 3: Continue iterating between data point assignment and centroid updating until convergence, or until a predefined number of iterations have been reached.

6 Termination: The algorithm terminates when either the cluster assignments do not change significantly between iterations or after a fixed number of iterations.

Here are a few things to consider when implementing this modification:

    You will need to experiment with the order in which you reassign data points from over-sized clusters to minimize the impact on the overall clustering quality.

    You should set an appropriate stopping criterion to prevent the algorithm from running indefinitely.

    Be prepared to handle situations where it is impossible to meet the cluster size constraint while maintaining good cluster quality. In such cases, you may need to relax the constraint or consider alternative clustering methods.

This modified k-means algorithm with cluster size constraints allows you to control cluster sizes while still finding meaningful clusters in your data.

User

Your proposed solution has an issue that points might be reassigned even if there are clearly no other suitable clusters. Please tweak the solution to avoid this issue.

ChatGPT

To address the issue of potentially reassigning points even when there are no other suitable clusters, we can modify the approach by introducing a priority queue or heap data structure to efficiently manage the reassignment process. Here's an improved version of the k-means algorithm with cluster size constraints:

1 Initialize centroids: Start by randomly selecting K initial centroids, where K is the desired number of clusters.

2 Assign data points to clusters: Assign each data point to the nearest centroid based on Euclidean distance, just like in the traditional k-means algorithm.

3 Update centroids: Recalculate the centroids of each cluster by taking the mean of all the data points assigned to that cluster.

4 Check cluster sizes: After updating the centroids, check the size of each cluster. If any cluster exceeds the pre-defined maximum size, proceed to reassign data points from that cluster to other clusters. Here's how you can do this:

    Identify the cluster(s) that exceed the maximum size.

    For each over-sized cluster: Create a priority queue (or heap) to store the data points in the cluster, ordered by their distance to the cluster centroid in ascending order. Start reassigning the farthest data points to other clusters (the ones with the smallest distances) until the cluster size constraint is satisfied. Update the centroids of the affected clusters accordingly. Continue this process until either the cluster size constraint is met or there are no more suitable clusters to reassign points to.

5 Repeat steps 2 and 3: Continue iterating between data point assignment and centroid updating until convergence, or until a predefined number of iterations have been reached.

6 Termination: The algorithm terminates when either the cluster assignments do not change significantly between iterations or after a fixed number of iterations.

This modified approach ensures that points are only reassigned to clusters that are suitable and have room to accommodate additional points while maintaining the overall quality of the clustering solution. The use of a priority queue helps prioritize the reassignment of points to clusters with the least impact on their quality.

Additionally, you should set an appropriate stopping criterion to prevent the algorithm from running indefinitely, and you may need to handle situations where it is impossible to meet the cluster size constraint while maintaining good cluster quality by relaxing the constraint or considering alternative clustering methods.
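
For reference, here is a minimal hand-written sketch (in F#, illustrative only - not ChatGPT output) of a greedy, capacity-aware assignment step: each point goes to its nearest centroid that still has room, so points are never pushed into clusters with no capacity. It assumes total capacity covers all points, and it is still greedy rather than optimal.

    // Assign each point to the nearest centroid that still has room.
    // Assumes centroids.Length * maxSize >= points.Length; otherwise minBy throws.
    let assignWithCapacity (maxSize: int) (centroids: (float * float) []) (points: (float * float) []) =
        let sqDist (x1, y1) (x2, y2) = (x1 - x2) ** 2.0 + (y1 - y2) ** 2.0
        let counts = Array.zeroCreate centroids.Length
        points
        |> Array.map (fun p ->
            let best =
                [ 0 .. centroids.Length - 1 ]
                |> List.filter (fun i -> counts.[i] < maxSize)     // only clusters with room
                |> List.minBy (fun i -> sqDist p centroids.[i])    // nearest of those
            counts.[best] <- counts.[best] + 1
            p, best)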


How is what you're doing mathematical? I mean you can call CS "math", but then I don't know what you mean by more "mathematical." Traditional cs algos are not what I term as more mathy from my pov. Maybe you can call stats "mathy" but this is more applied math and not too different from what "builders" do.

Also, what you're doing here is asking ChatGPT for the answer. ChatGPT is more effective via collaboration. Meaning instead of shoving the entire problem down its throat and asking it to solve it, you ask it for advice. Ask it for bits and pieces of things.


To some extent I agree with you, in that all software engineers have to think about refactoring and time complexity, involving a logical or mathematical style of thinking. However, there's definitely a spectrum of how distinctly mathematical the work of a software engineer is.

A lot of front-end development for example does not require familiarity with any algorithms, formulae or mathematical structures. You might need to reason about large systems and have a rough idea of when hashmaps are useful, but the bulk of the work is constructing an interface according to functional business requirements. I frequently see comments here along the lines of "why am I being interviewed about algorithms when I'll never use them in my job".

A more mathematically oriented developer may be in the business of modelling and predictions. They may be confronted with a novel real world problem involving traffic, or trading, or electricity networks, that potentially no one has tried to solve before. They may be required to find a mathematical structure that closely approximates real world behaviour, implement that structure via code, and deploy a continuous model which allows their client to analyse and project.

Of course, you also have academic mathematicians using software like Maple or SageMath to assist with their research. This is another level more mathematical. Perhaps what you're getting at is that people can ask ChatGPT questions like "write me some Sage code to get the Delaunay triangulation of this set of points". I totally agree that it can probably do well at these tasks.


Modelling stuff sounds like data science. It's a term they often use, and it sounds very much like the same deliverables you mentioned. I've never seen a data scientist term themselves as more mathematically oriented. Also, let's not play around: what you actually meant was mathematically "superior". That much is clear. Sounds like you know stats and you think that makes you more "mathematical".

You also talk about things like traffic. Modelling traffic is mathematical? Sounds like a simulation to me. Man, take a look at GTA. That game is almost entirely made by builder engineers creating simulations. It's the same shit and likely far more advanced than what any data scientist can come up with.

Anyway, from your example and from what I've seen, it sounds like you're still doing the same thing. CS algorithms. You're just using algorithms that likely aren't very popular, or are very specific to data and stats. But adjusting stuff like k-means clustering still sounds like regular CS stuff to me.

There's no point in calling it more "mathematical", because it's not. The builder engineers who wrote all the systems in GTA, or even say Red Dead Redemption, use a ton of "math", and they don't term themselves more "mathematical", even though their simulations are likely more complex than anything you will ever build.

That's why when you called yourself mathematically superior (again, don't deny this.. we all know what you really mean here) I thought you were talking actual math. Because if you looked at a math equation, it doesn't look anything like an algorithm. Math equations are written as a single expression. Math equations model the world according to a series of formulas. It's very different to a CS algorithm.

Mathematically oriented programming involves largely the same thing, using the algebras of mathematics.

If you're not doing this just call it data science instead of trying to call yourself more "mathematical". If you truly were more mathematically oriented you would know what I'm talking about.

Geeze some guy writing "models" and doing some applied math+stats like what every other freaking programmer out there is doing and he calls himself more "mathematically oriented."


Statistics isn't my strongest area. But I do have a doctorate in quantum information theory, so I have some idea of what it means to be mathematical.

Data science definitely forms part of what I do, as my employer stores a lot of data that we use to estimate various parameters. But there's also work on creating bespoke routines for solving vehicle routing problems in niche domains, which I wouldn't really class as data science.

Thanks for the discussion, anyway. I'm not interested in being insulted.


No one is interested in being insulted. But you only feel insulted because what I said is 100 percent true.

"Bespoke routines for vehicular routing problems" lol. I mean phrases like that reveal what you think of yourself as.

You're writing simulations. That's all. "Bespoke" lol. And those simulations have lower fidelity then a video game like GTA which likely does traffic at higher levels of fidelity and real time with a renderer.

I have a doctorate in mathematics. Prior to that I've done work in cs. Doesn't mean shit. I don't name drop that crap to pretend to be superior.


An LLM is going to be most helpful for topics that came up a lot during training. What you're discussing doesn't sound like it falls into that category.


I'd agree with that. I'd even go a step further and say that most of my work and things I'm interested in coding don't fall into that category. I have absolutely nothing against people who do find it useful, but I'm keen to reassure user badrabbit that they're not a jerk for not being interested in using LLMs.


I know exactly why I don't want to use ChatGPT.

I love to write code, I love to modify existing code too.

I do not love to read and then fix code after someone else all the time. With ChatGPT I have to read, then understand, then fix code after ChatGPT every time.

Also, I do not love to fix code that often contains hallucinations.


First, are you sure you're using GPT-4?

Second, have you tried pasting any syntax errors you get back into the chat window? Usually, GPT will try to fix them.

Third, you can have GPT write a function for you and then write a different version of the function using a different algorithm, along with a test (e.g. a fuzz test) to check whether the two functions produce the same output given the same input. This makes it less likely that an error will slip through, because both algorithms would have to produce the same incorrect output for the same input.
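
As a rough sketch of that third point (hypothetical functions, in F#): have two independent implementations and fuzz them against each other; an error only slips through if both versions make the same mistake on the same input.

    // Two implementations of the same (hypothetical) function
    let sumOfSquaresV1 xs = xs |> List.map (fun x -> x * x) |> List.sum
    let sumOfSquaresV2 xs = List.fold (fun acc x -> acc + x * x) 0 xs

    // A crude fuzz test: random inputs, compare outputs
    let fuzz () =
        let rng = System.Random()
        for _ in 1 .. 1000 do
            let xs = List.init (rng.Next 20) (fun _ -> rng.Next(-100, 100))
            if sumOfSquaresV1 xs <> sumOfSquaresV2 xs then
                failwithf "mismatch on %A" xs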


Yes, I used ChatGPT-4.

Second, your fix is to verify code generated by the LLM and then play with it, trying to find a way to fix it. I'm quite sure that I will spend less time and less mental energy by writing the code myself in the first place.

The third fix is to play with ChatGPT more, and read more code generated by ChatGPT trying to find errors in it.

What happened with "reading code written by someone else is harder than writing your own code"?


I had similar success when trying to solve problems with Makefiles.


Yeah, for me it has been Docker and Kubernetes. I have found it to be something of a killer app for spinning up on anything with a bunch of annoying little details that I'm not already super familiar with.

Make, Docker, Kubernetes - they all fit this pattern. Heck, maybe I won't be so down on autotools if I run into it again, now that I can attack it with LLM support.

Or HTML/CSS; I haven't written any of that in the LLM era, but maybe I'd enjoy it more now.


The LLM effect may enable people to use languages that are harder to write but produce a more maintainable and/or performant product. Maybe it's finally time to learn Rust or OCaml!



