
I find this perspective bizarre. Though I'm not happy about it all being centralized, the closest thing we have these days to the very niche phpBB forums of the 2000s is various subreddits focused on very specific topics. Scrolling through the front page is slop, sure, but whenever I'm looking for perspectives on a niche topic, searching for "<topic> reddit" is the first thing I do. And I know many people without any connection to the software industry who feel the same way.


I would love to have some directory with all kinds of active (PHP) web forums. That was the heyday of the open web for me.


Perhaps swearing at the LLM actually produces worse results?

Not sure if you’re being figurative, but if what you wrote in your first comment is indicative of the tone with which you prompt the LLM, then I’m not surprised you get terrible results. Swearing at the model doesn’t help it produce better code. The model isn’t going to be intimidated by you or worried about losing its job, which I bet your junior engineers are.

Ultimately, prompting LLMs is simply a matter of writing well. Some people seem to write prompts like flippant Slack messages, expecting the LLM to somehow have a dialogue with you to clarify your poorly-framed, half-assed requirement statements. That’s just not how they work. Specify what you actually want and they can execute on that. Why do you expect the LLM to read your mind and know the shape of nginx logs vs nginx-ingress logs? Why not provide an example in the prompt?
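
For example, something like this (the log line and field list below are made up for illustration, not from any real system) gives the model something concrete to match against instead of leaving it to guess:

    Write a Python function that parses our ingress-nginx access logs into dicts.
    Here is one sample line, exactly as it appears in our logs:

    203.0.113.7 - - [10/Oct/2025:13:55:36 +0000] "GET /api/health HTTP/1.1" 200 612 "-" "kube-probe/1.29" 0.004

    Extract the client IP, timestamp, method, path, status code, and response time.
    Return None for lines that don't match, and include a couple of unit tests.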

It’s odd—I go out of my way to “treat” the LLMs with respect, and find myself feeling an emotional reaction when others write to them with lots of negativity. Not sure what to make of that.


That's more my inner monologue than what is typed into the LLM.


I would like to propose a moratorium on these sorts of “AI coding is good” or “AI coding sucks” comments without any further context.

This comment is like saying, “This diet didn’t work for me” without providing any details about your health circumstances. What’s your weight? Age? Level of activity?

In this context: What language are you working in? What frameworks are you using? What’s the nature of your project? How legacy is your codebase? How big is the codebase?

If we all outline these factors plus our experiences with these tools, then perhaps we can collectively learn about the circumstances when they work or don’t work. And then maybe we can make them better for the circumstances where they’re currently weak.


I feel like diet as an analogy doesn't work. We know that the only way to lose weight is with a caloric deficit. If you can't do this, it doesn't matter what you eat; you won't lose weight. If you're failing to lose weight while dieting, it's because you're eating too much, full stop.

Whereas measuring productivity and usefulness is way more opaque.

Many simple software systems are highly productive for their companies.


Meridian | Founding Engineers (Product, Infra) | New York, NY (In-person) | https://careers.meridian.tech | Full-time

Meridian develops software to accelerate the next generation of companies building in the physical world across aerospace, defense, automotive, robotics, and more. We automate the administrative work of quality and compliance to help our customers go to market faster, scale their production, and increase their pace of innovation.

Meridian is 3 months old. We’ve already signed paying customers, built and launched our product, and raised an oversubscribed pre-seed round.

For our first three hires, we’re looking for world-class generalist engineers who can ship great product experiences fast while laying the foundations for a platform that will scale to large and complex enterprises in the future. We're offering competitive salaries and above-market equity.

We're building an in-person engineering team that prides itself on shipping excellent products for a user segment (quality engineers in manufacturing) that's been sorely neglected in the past. We ship with speed and quality, own a large product surface area, and are relentlessly customer-focused.

To apply, send us your resume and anything else you’d like to careers@meridian.tech.


This is excellent!

I think the utility of generating vectors is far, far greater than all the raster generation that's been a big focus thus far (DALL-E, Midjourney, etc). Those efforts have been incredibly impressive, of course, but raster outputs are so much more difficult to work with. You're forced to "upscale" or "inpaint" the rasters using subsequent generative AI calls to actually iterate towards something useful.

By contrast, generated vectors are inherently scalable and easy to edit. These outputs in particular seem to be low-complexity, with each shape composed of as few points as possible. This is a boon for "human-in-the-loop" editing experiences.
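
As a toy illustration of that editability (the shapes here are hypothetical, and the snippet uses only the Python standard library, nothing specific to any particular generator), tweaking a generated SVG is just a couple of targeted attribute edits rather than another round of generation:

    import xml.etree.ElementTree as ET

    ET.register_namespace("", "http://www.w3.org/2000/svg")

    # A made-up low-complexity output: two shapes, a handful of points.
    svg = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
      <circle cx="50" cy="40" r="20" fill="#e63946"/>
      <rect x="30" y="65" width="40" height="20" fill="#457b9d"/>
    </svg>"""

    root = ET.fromstring(svg)
    ns = "{http://www.w3.org/2000/svg}"
    # Recolor the circle and nudge the rect; the result stays crisp at any scale.
    root.find(f"{ns}circle").set("fill", "#2a9d8f")
    rect = root.find(f"{ns}rect")
    rect.set("x", str(float(rect.get("x")) + 10))
    print(ET.tostring(root, encoding="unicode"))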

When it comes to generative visuals, creating simplified representations is much harder (and, IMO, more valuable) than creating highly intricate, messy representations.


Have you looked at https://www.recraft.ai/ recently? The image quality of their vector outputs seems to have gotten quite good, although you obviously still wouldn't want to try to generate densely textured or photographic-like images like Midjourney excels at. (For https://gwern.net/dropcap last year or before, we had to settle for Midjourney and create a somewhat convoluted workflow through Recraft; but if I were making dropcaps now, I think the latest Recraft model would probably suffice.)


Link to their vector page, since the main page makes them look like yet another AI image generator:

https://www.recraft.ai/ai-image-vectorizer

The quality does look quite amazing at first glance. How are the vectors to work with? Can you just open them in illustrator and start editing?


No, I was actually referring to their native vector AI image generator, not their vectorizer - although the vectorizer was better than any other we found, and that's why we were using it to convert the Midjourney PNG dropcaps into SVGs.

(The editing quality of the vectorized ones was not great, but it is hard to see how they could be good given their raster-style appearance. I can't speak to the editing quality of the native-generated ones, either in the old obsolete Recraft models or the newer ones, because the old ones were too ugly to want to use, and I haven't done much with the new one yet.)


I was under the impression that their AI Vector generator generates a PNG and vectorizes under the hood.


Hm... I was definitely under the impression that it is generating SVGs natively, and that was consistent with its output and its recent upgrades like good text rendering, and I'm fairly sure I've said as much to the CEO and not been corrected... But I don't offhand recollect a specific reference where they say unambiguously that it's a SVG generator rather than vectorizer(raster), so maybe I'm wrong about that.


For me, it's because vector generation is much harder than raster, Recraft has raised just over $10M (not that much in this space), and their API has no direct vector generation.


There is also the possibility of using these images as guidance for rasterization models: generate easily manipulable and composable images as a first stage, then add detail once the image composition is satisfactory.


Trivially possible with controlnets!
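
A rough sketch of that two-stage flow with the diffusers library (the model IDs and Canny conditioning here are just one plausible setup, not something from this thread): rasterize the approved vector composition, take its edges, and let a ControlNet-guided diffusion pass add the detail.

    import cv2
    import numpy as np
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # Stage 1 output: the approved composition, rasterized from the generated vectors.
    gray = np.array(Image.open("composition.png").convert("L"))

    # Condition on its edges so the composition is preserved while detail is added.
    edges = cv2.Canny(gray, 100, 200)
    control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
        torch_dtype=torch.float16).to("cuda")

    detailed = pipe("detailed painterly illustration, rich texture",
                    image=control_image, num_inference_steps=30).images[0]
    detailed.save("detailed.png")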


My little project for the highly intricate, messy representation ;) https://github.com/KodeMunkie/shapesnap (it stands on the backs of giants, original was not mine). It's also available on npm.


I always imagine how useful Sora.ai could be if it generated 3D models to render its animations from instead.


I agree, that's the future of these video models. For professional use you want more control, and the obvious next step towards that is to generate the full 3D scene (in the form of animated Gaussian splats, since those are more AI-friendly than mesh-based 3D). That also helps the model stay consistent, and it gives the user more control over the camera and the scene.


I couldn't agree more. I feel that the block-coding and rasterized approaches that are ubiquitous in audio codecs (even the modern "neural" ones) are a dead-end for the fine-grained control that musicians will want. They're just fine for text-to-music interfaces of course.

I'm working on a sparse audio codec that's mostly focused on "natural" sounds at the moment, and uses some (very roughly) physics-based assumptions to promote a sparse representation.

https://blog.cochlea.xyz/sparse-interpretable-audio-codec-pa...
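
(Not the codec above, which is physics-inspired; just a minimal numpy sketch of what a sparse representation means in practice: greedily explain the signal with a few unit-norm "atoms" from a dictionary via matching pursuit, so the encoding is a short list of (atom, amplitude) events rather than a dense grid of coefficients.)

    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms=4):
        """Greedy sparse coding: repeatedly pick the unit-norm atom most
        correlated with the residual and record (index, coefficient)."""
        residual = signal.astype(float).copy()
        code = []
        for _ in range(n_atoms):
            correlations = dictionary @ residual      # one inner product per atom
            i = int(np.argmax(np.abs(correlations)))
            coeff = correlations[i]
            code.append((i, coeff))
            residual -= coeff * dictionary[i]
        return code, residual

    # Toy dictionary of unit-norm windowed sinusoids.
    n = 1024
    t = np.arange(n)
    atoms = np.stack([np.hanning(n) * np.sin(2 * np.pi * f * t / n) for f in range(1, 65)])
    atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

    signal = 0.8 * atoms[5] + 0.3 * atoms[40]
    code, residual = matching_pursuit(signal, atoms)
    print(code[:2], float(np.linalg.norm(residual)))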


Interesting. I'm approaching music generation from another perspective:

https://github.com/chaosprint/RaveForce

RaveForce - An OpenAI Gym style toolkit for music generation experiments.


Ah, we should be friends!

I'm not sure what else to add, except that these are exactly the thoughts I think, and it used to feel lonely ;)


Install Cursor (https://cursor.com), go into Cursor Settings and disable everything but Claude, then open Composer (Ctrl/Cmd + I). Paste in your exact command above. I bet it’ll do something pretty close to what you’re looking for.


I've completely switched over to Cursor from Copilot. Main benefits:

1. You can configure which LLMs you want to use, whereas Copilot only supports OpenAI models. I just use Claude 3.5 for everything.

2. Chatting with the LLM can produce file edits that you can directly apply to your files. Cursor's experimental "Composer" UI lets you prompt to make changes to multiple files, and then you can apply all the changes with one click. This is way more powerful than just tab-complete or a chat interface. For example, I can prompt something like "Factor out the selected code into a new file" and it does everything properly.

3. Cursor lets you tune what's in LLM context much more precisely. You can @-mention specific files or folders, attach images, etc.

Note I have no affiliation whatsoever with Cursor, I've just really enjoyed using it. If you're interested, I wrote a blog post about my switch to Cursor here: https://www.vipshek.com/blog/cursor. My specific setup tips are at the bottom of that post.


Which state is your town located in, out of curiosity? I'm trying to build a mental rolodex of which states have towns that are development-friendly.


Washington. But I think it also depends on which developer you are - it seems like two developers here have 90%+ of the new developments.


Many AI-generated images you encounter are low-effort creations without much prompt tuning, created using something like DALL-E or Llama 3.1. For whatever reason, the default style of DALL-E, Llama 3.1, and base Stable Diffusion seems to lean towards a glossy "photorealism" that people can instantly tell isn't real. By contrast, Midjourney's style is a bit more painted, like the cover of a fantasy novel.

All that being said, it's very possible to prompt these generators to create images in a particular style. I usually include "flat vector art" in image generation prompts to get something less photorealistic that I've found is closer to the style I want when generating images.

If you really want to go down the rabbit hole, click through the styles on this Stable Diffusion model to see the range that's possible with finetuning (the tags like "Watercolor Anime" above the images): https://civitai.com/models/264290/styles-for-pony-diffusion-...


DALL-E's style is intentional, to prevent the misuse of fake but near-undetectable, highly realistic images.


What's a good prompt if I want a schematic/engineering/wireframe look for all my objects?

Most models seem very reluctant to do that. (Historically, rendering full 3D was also easier than rendering wireframes. Art imitating life.)


Uhm, Llama 3.1 is an LLM.


Ah, my mistake. "Meta AI" can generate both text and images, but apparently text prompts are handled by Llama 3.1 while image prompts are handled by Emu. I initially struggled to find the name of the image generation model.


Oh, I didn't even realize Facebook had a text to image model.


Just to clarify, are you saying these are relatively new BEV trucks coming into your shop? If so, do the mileage issues usually boil down to battery degradation due to driver behavior as you’re suggesting, or is it because the EPA-rated mileage was unrealistic in the first place? Or both?

