osm3000's comments | Hacker News

I just read it in the book section referenced in the parent comment. It shook the imaginary bubble my mind lives in a bit. I want to reflect on it more.

Somehow, in the midst of all these LLMs and diffusion models, the only thing that seems to catch attention is creativity. I had not thought about experience.


Experience makes creativity harder, but that's what mature creativity is. Did anyone tell you it wouldn't be work?

The people who are most awed by LLMs are those people most used to having to be merely plausible, not correct.


Why is Tor graded C, even though there are no downsides?


See the same question down this page: https://news.ycombinator.com/item?id=43534479


Of course humor :D

It was just cool to see it as the one odd point :)


I am curious too


I can't believe any of this made a difference to privacy. There is ZERO chance that the law can be enforced here. I've worked at a few startups in Europe; no one understands their obligations, let alone the consequences stemming from third-party services.

This whole cookie-banner business, and GDPR in general, is as good as literature.


What is the name of that search engine?

I agree that this is what an effective search strategy looks like. But it is difficult to imagine exploration working this way. Exploration implies that I don't know what I am looking for.

I am working on a plugin to filter DuckDuckGo search results with an LLM (the magic word). Perhaps that will help.


I loved Keras at the beginning of my PhD, in 2017. But it was just the wrong abstraction: too easy to start with, too difficult to create custom things (e.g., a custom loss function).

I really tried to understand TensorFlow; it took me a week to get a for-loop working. A nested for-loop proved impossible.

PyTorch was just perfect out of the box. I don't think I would have finished my PhD on time if it weren't for PyTorch.
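For contrast, a minimal sketch of the kind of thing that felt effortless in PyTorch (the toy model and data here are invented purely for illustration): a custom loss plus nested plain-Python loops, with no graph construction involved.

```python
import torch

# Toy model and optimizer, purely for illustration.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def custom_loss(pred, target):
    # Any Python expression works; autograd records it eagerly.
    return ((pred - target) ** 2).mean() + 0.1 * pred.abs().mean()

for epoch in range(5):                        # a plain Python loop...
    for x, y in [(torch.randn(4, 10), torch.randn(4, 1))]:  # ...nested, no tf.while_loop needed
        opt.zero_grad()
        loss = custom_loss(model(x), y)
        loss.backward()
        opt.step()
```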

I loved Keras. It was an important milestone, and it made me believe deep learning is feasible. It was just...not the final thing.


Keras 1.0 in 2016-2017 was much less flexible than Keras 3 is now! Keras is designed around the principle of "progressive disclosure of complexity": there are easy high-level workflows you can get started with, but you're always able to open up any component of the workflow and customize it with your own code.

For instance: you have the built-in `fit()` to train a model. But you can customize the training logic (while retaining access to all `fit()` features, like callbacks, step fusion, async logging and async prefetching, distribution) by writing your own `compute_loss()` method. And further, you can customize gradient handling by writing a custom `train_step()` method (this is low-level enough that you have to do it with backend APIs like `tf.GradientTape` or torch `backward()`). E.g. https://keras.io/guides/custom_train_step_in_torch/
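A minimal sketch of the first of those levels, a `compute_loss()` override in the Keras 3 subclassing API (the toy model and the extra loss term are purely illustrative):

```python
import keras

class MyModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(inputs)

    # Overriding compute_loss() customizes the loss while keeping
    # everything fit() provides: callbacks, distribution, logging, etc.
    def compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None, **kwargs):
        mse = keras.ops.mean(keras.ops.square(y - y_pred))
        return mse + 0.01 * keras.ops.mean(keras.ops.abs(y_pred))  # illustrative extra term

# Usage: compile() without a loss argument, then fit() as usual.
# model = MyModel()
# model.compile(optimizer="adam")
# model.fit(x_train, y_train)
```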

Then, if you need even more control, you can just write your own training loop from scratch, etc. E.g. https://keras.io/guides/writing_a_custom_training_loop_in_ja...


> it was just the wrong abstraction: too easy to start with, too difficult to create custom things

Couldn’t agree with this more. I was working on custom RNN variants at the time, and for that, Keras was handcuffs. Even raw TensorFlow was better for that purpose (which in turn still felt a bit like handcuffs after PyTorch was released).


Keras was a miracle coming from writing stuff in Theano back in the day though.


I didn't realize Keras was actually released before TensorFlow, huh. I used Theano quite a bit in 2014 and early 2015, but then went a couple of years without any ML work. Compared to the modern libraries Theano is clunky, but it taught one a bit more about the models, heh.


Wow that gives me flashbacks to learning Theano/Lasagne, which was a breath of fresh air coming from Caffe. Crazy how far we've come since then.


Of course, it's easy to be ideological and defend technology A or B nowadays, but I agree 100% that in 2016/2017 Keras was the first touchpoint with Deep Learning for many people and companies.

The ecosystem, roughly speaking, was:

* Theano: verbosity nightmare
* Torch: not user-friendly
* Lasagne: a complex abstraction on top of Theano
* Caffe: no flexibility at all; anything beyond the traditional architectures was hard to implement
* TensorFlow: unnecessarily complex API and no debuggability

I am not saying Keras solved all of those things right away, but honestly, just the fact that you could implement a Deep Learning architecture on top of Keras in 2017 was, I believe, one of the critical moments in Deep Learning history.

Of course today people have different preferences and I understand why PyTorch had its leap, but Keras was in my opinion the best piece of software back in the day to work with Deep Learning.


And PyTorch was a miracle after coming from LuaTorch (or Torch7 iirc). We’ve made a lot of strides over the years.


I am working on Omie, https://webapp.omie42.com/, yet another language learning tool :D

You start by learning the words: it augments the repetition aspect of Anki with added context for the words (random example sentences for each word, each time). I curated/generated the set of words in advance. Then you move on to practicing those words in chats about different topics (soon voice conversation as well). You get feedback on each turn about your mistakes, without interrupting the chat.

I am using DeepL for translation (I am not loving it, though, since it gives very narrow, strict definitions; I will be exploring OpenAI for that soon). For chat, OpenAI GPT-4o.
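The per-turn feedback is roughly this shape: a sketch using the standard OpenAI Python client, where the system prompt and the `chat_turn()` helper are hypothetical, not Omie's actual code.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chat_turn(history, user_message):
    # Hypothetical helper: one call drives the conversation; the system
    # prompt asks for brief correction notes without derailing the chat.
    messages = [{"role": "system",
                 "content": "You are a language tutor. Reply in the target "
                            "language, then briefly list the learner's mistakes."}]
    messages += history + [{"role": "user", "content": user_message}]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content
```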

This is my first webapp. I used HTMX + AlpineJS, Python, Supabase (Auth, S3, DB), and hosting on my Pi.

It's a work in progress, but I need to start finding core users to give feedback. I am not really sure how, though. I had some tough experiences on Reddit and Discord (understandable, tbh).


That was a lot of fun! Thank you :)


True, but if we take their word for it, https://openai.com/enterprise-privacy

then I don't see an issue.


Taking a privacy policy at its word is not really "protecting privacy", in my opinion. It's just thinking that as long as they pinky-swear, then there is no privacy issue.

Even if the company intends on adhering to the policy, the policy does nothing to protect against that company itself being hacked.

It still represents a risk.

