mikesurowiec's comments

Thanks for pointing out Parcel RSC. I just read through the docs, and they do a great job of explaining RSCs in a way I can understand, in contrast to NextJS, where it’s unclear where the framework stops.

https://parceljs.org/recipes/rsc/


"unclear where the framework stops" is a great way to phrase that issue. It's something I've run into with NextJS in a few contexts.

I really appreciate when frameworks leverage standards and/or indicate the boundaries of the framework as much as possible.


I expect data centers will become more expensive precisely because everyone is building at the same time: a supply chain crunch.


Temporary. During the long operating-and-depreciation tail phase, the oversupply will drive down costs for users, like fiber cables did.


I’ve seen people spin results in their favor within a company, where you can validate them, but in interviews it seems nearly impossible to validate.

Does anyone have techniques for that?


My job as a developer (until 2020) was to ensure the project or major feature I was responsible for was done on time, on budget, and met requirements.

My job was not to go out and sell the product; whether the product made $0 or $1 million, I had no control over that.

The other thing I need to communicate is that I am capable of working at the level of scope, impact and ambiguity required for the job.

https://www.levels.fyi/blog/swe-level-framework.html

I didn’t understand that myself until a decade ago. Before the gatekeeping starts (not by you): yes, it got me through a five-round behavioral loop at BigTech (AWS’s consulting department), and after leaving I’m now a “staff software architect” at a third-party consulting company (both full-time direct hires).


> ensure the project or major feature I was responsible for was done on time, on budget, and met requirements.

The issue with this is that the bounds are drawn by someone else; the best you can do is meet them. No one really cares if you save 90% of the budget: it was already allocated and will just get funnelled off somewhere else. It doesn't matter if it's early, because they probably didn't need it until they said they needed it, and 'meets requirements' is a given.

Compare this to a sales job or something more outward-facing: a salesperson might have targets but can blow them out of the water with some luck and skill (and get paid commission). They aren't operating within someone else's small framework, but as a free variable against the open market.


It’s not that simple. It’s a negotiation up front if you are responsible for a feature/project. You talk to the stakeholders and let them prioritize what’s most important - budget, time, requirements - and you talk to them about the tradeoffs.

Your leverage comes from working on larger projects with more impact and scope.

As a mid-level employee I was responsible for smaller projects; now I’m responsible for larger projects with multiple “work streams”, more people under me, and closer proximity to the “business” and “sales” sides.


It's also equally thankless when one team meets all of its goals/deadlines for its small product but the rest of the department is a dumpster fire, making the entire product suite unusable.


I like to call that not being on the critical path of company success: whenever you can, push to get your team onto that path. If management can buy into OKRs as a methodology, that can help achieve valuable alignment (as long as they don't misapply OKRs as a regular "these are the features we want" list).


The other team is outside of my circle of influence and control.

But at the end of the day, did money get put in my account?

Coarse language:

https://youtu.be/3XGAmPRxV48?si=ibxkZ2_GYaITjiWt


You can't practically validate it.

This is why it's important not just to ask about previous results. This is also why you see so many "solve this random programming problem" type interviews - they hope (wrongly) that it's less fakeable and somehow gives you an idea of how they will do in the future.

Like many, I don't find those particularly useful; instead, I try to understand how candidates think and approach things.

If this is a manager, for example, give them real organizational problems you've seen, ask them how they would approach them, and have them walk you through their thought process. You will often start to get "weird" answers from fakers or spinners, especially if you start to ask about anything related to performance or improving it (again, in my experience, YMMV, etc.). One idle theory (i.e., I don't claim this is correct in any meaningful way) I had about this was that a lot of them didn't actually know how to help people or organizations, so if you force them to explain how they approach it for real, they start to fall apart. Instead of thinking about that stuff, they were thinking about how to progress or spin things for themselves. Meanwhile, good managers often spend lots of time thinking about how to help their people and organizations, and whether they are good or bad or whatever, it's not a topic that tends to trip them up.

For ICs, for example, you can get them to teach you something real they learned on the project they claim was a great success, ideally something that helped make the project successful. In my experience, this also leads fairly quickly to discovering whether they believe they are smarter than everyone else. The best people I ever found (in retrospect) were usually the ones who would teach me things they learned, but usually not things they came up with. They would teach me something they learned from someone else during the project that was still critical to the success of the project.

Everything in an interview is, of course, fakeable with enough preparation, the above things included. But it is harder for people to fake approaches, fake teaching, and spin results successfully all at the same time.

You start to get into "this person is in the 99th percentile of all fakers" territory, which is probably not worth trying to solve ;)


> This is also why you see so many "solve this random programming problem" type interviews - they hope (wrongly) that it's less fakeable and somehow gives you an idea of how they will do in the future.

Whether they can code or not isn’t indicative of whether they can get things done. The last time I had an open req, last year, the coding part was ChatGPT-simple. It was for a green-field initiative. I needed to be able to throw any random thing that came up - a complex deliverable - at them and know they could run with it: talk to the stakeholders, disambiguate the problem space, notice XY problems, come back with a design and a proposal, and learn what they needed to learn with a little direction. I needed a real “senior developer”, not someone who “codez real gud”.

I actually turned down a “smart” candidate who had been laid off from the AWS EC2 service team, I think the part dealing with Elastic Block Store (EC2 encompasses more than just VMs).

I knew he could code. But he didn’t show me any indication that he could deal with ambiguity at the level I needed, or that he had the soft skills.


Agreed fully.


I don't, except for trying to contact people at the job applicant's former workplace and asking "is this really what happened?" (which I wouldn't do).

I'm thinking that as long as someone finishes the projects you start, it's easy to take credit for all of it, from start to finish, when interviewing elsewhere?


I’m not saying lie, and I never have. But when you change jobs, you control the narrative.

At your current job, your history of both successes and failures is well known, even if the failures happened early on and you learned from them. You never get a second chance to make a first impression.


A rough idea of the price differences...

  Per 1k tokens        Input   |  Output
  Amazon Nova Micro: $0.000035 | $0.00014
  Amazon Nova Lite:  $0.00006  | $0.00024
  Amazon Nova Pro:   $0.0008   | $0.0032

  Claude 3.5 Sonnet: $0.003    | $0.015
  Claude 3.5 Haiku:  $0.0008   | $0.0004
  Claude 3 Opus:     $0.015    | $0.075
Source: AWS Bedrock Pricing https://aws.amazon.com/bedrock/pricing/
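
For a feel of what these rates mean per request, here's a minimal sketch (TypeScript) using the per-1k prices from the table above; the model subset and token counts are just illustrative:

  // Per-1k-token prices (USD), copied from the table above.
  const pricing = {
    "nova-micro":        { input: 0.000035, output: 0.00014 },
    "nova-pro":          { input: 0.0008,   output: 0.0032 },
    "claude-3.5-sonnet": { input: 0.003,    output: 0.015 },
  } as const;

  // Cost of a single request given its token counts.
  function requestCost(
    model: keyof typeof pricing,
    inputTokens: number,
    outputTokens: number,
  ): number {
    const p = pricing[model];
    return (inputTokens / 1000) * p.input + (outputTokens / 1000) * p.output;
  }

  // e.g. a 2,000-token prompt with a 500-token completion:
  console.log(requestCost("nova-pro", 2000, 500));          // 0.0032
  console.log(requestCost("claude-3.5-sonnet", 2000, 500)); // 0.0135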


It’s fascinating that Amazon is investing heavily in Anthropic while simultaneously competing with them.


Amazon is a retailer and strives to offer choice, whether of books or compute services.

AWS is the golden goose. If Amazon doesn't tie up Anthropic, AWS customers who need a SOTA LLM will spend on Azure or GCP.

Think of Anthropic as the "premium" brand -- say, the Duracell of LLMs.

Nova is Amazon's march toward a house brand, Amazon Basics if you will, that minimizes the need for Duracell and slashes cost for customers.

Not to mention the potential benefits of improving Alexa, which has inexcusably languished despite popularizing AI services.

(Edited for readability.)


Minor nit: These days I think Ads has taken over as the golden goose, but that doesn’t diminish the contributions of AWS.


Is that why Amazon's product search is terrible? Because it's more profitable for them when I scroll through 5 pages of junk than if I can navigate immediately to the thing I want?


Yes, because the sellers of that junk pay them to put it there, and most people give up and buy some of it. If it didn't work, they wouldn't do it.

Cory Doctorow covered this specific phenomenon in his great article that coined the term enshittification.


It’s fascinating that Amazon Web Services has so many overlapping and competing services to achieve the same objective. Efficiency/a small footprint was never their approach :D

For example, look at how many different types of database they offer (many achieve the same objective but with different instantiations).

https://aws.amazon.com/products/?aws-products-all.sort-by=it...


Soon AWS is going to need an LLM just to recommend what service a customer should use.


Let me tell you about Amazon Q


To quote, “right tool for right job”.


They are not competing; those are offerings. "AWS has many offerings" is a completely different thing from saying they compete against each other.


As others said, the product isn't the model, it's the API-based token usage. Happily selling whatever model you need, with easy integrations with the rest of your AWS stack, is the entire point.


Has anyone found TPM/RPM limits on Nova? Either they aren't limited, or the quotas haven't been published yet: https://docs.aws.amazon.com/general/latest/gr/bedrock.html#l...


Maybe they want to gauge demand for a bit first?


I suggest you give the price per million tokens, as that seems to be the standard.


From my personal table https://i.imgur.com/WwL9XkG.png

Price is pretty good. I'm assuming 3.72 chars/token on average, though... I couldn't find that number anywhere.
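
A minimal sketch of both conversions (per-1k to per-1M, and characters to tokens); the 3.72 chars/token figure is the assumption above, not a published number:

  // Convert a per-1k-token price to the more standard per-1M quote.
  const perMillion = (perThousand: number): number => perThousand * 1000;

  // Estimate token count from raw character length.
  // 3.72 chars/token is an assumption (see above), not a published figure.
  const CHARS_PER_TOKEN = 3.72;
  const estimateTokens = (text: string): number =>
    Math.ceil(text.length / CHARS_PER_TOKEN);

  console.log(perMillion(0.000035));               // Nova Micro input: $0.035 per 1M tokens
  console.log(estimateTokens("a".repeat(10_000))); // 2689 tokens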


I'm guessing they just copy-pasted from the official docs page.


Eyeballing it, Nova seems to be 1.5 orders of magnitude cheaper than Claude at all model sizes.


You have added an extra zero for Haiku; its output cost is $0.004.


Thanks, that had confused me when I compared it to Nova Pro.


You're absolutely right, apologies!


Doesn’t look particularly favourable versus DeepSeek and Qwen. The main DeepSeek model is about the same price as the smallest Nova.

I guess it depends on how sensitive your data is.


Does anyone know of performance benchmarks?


Is this the same reason you wouldn’t pick up trash on the floor?


Trash on the floor has a known quick solution; a broken coffee machine doesn't, and the fix often needs to be coordinated, or you easily end up with multiple people called in to repair it, etc.

In general, if a task needs to be coordinated, you shouldn't try to do it yourself. At best you should notify a coordinator, but usually they are already aware of it and you are just spamming them.


Picking up trash from the floor does not require special tools, and does not void the warranty of the floor.


What kind of weird fantasy is this?


Relative speed difference is another key factor — arguably more important than absolute speed — but is more difficult to regulate.

When traffic on the highway is near a standstill, you really shouldn’t fly down an open lane at the highway speed limit.


The post states "they used Cap’n Proto before they hired me"


He helped build the Workers platform after they hired him.


If you're interested in learning more, the term to search for is "takt-time planning", specifically as it relates to lean construction methodology rather than manufacturing.

https://en.wikipedia.org/wiki/Takt_time
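
The formula itself is simple; here's a minimal sketch with made-up numbers (in lean construction the "unit" is typically a work zone handed from trade to trade rather than a manufactured part):

  // Takt time = net available working time / units demanded in that period.
  function taktTime(availableMinutes: number, unitsDemanded: number): number {
    return availableMinutes / unitsDemanded;
  }

  // An 8-hour shift (480 min) minus 60 min of breaks, with 42 units due:
  console.log(taktTime(480 - 60, 42)); // 10 minutes per unit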


Knowing German, I thought for sure this was the word I know from German. And it was, but surprisingly it arrived via Japanese: takuto taimu.


I really enjoyed this, thanks for sharing. My takeaway is don't be afraid to put an unreasonable amount of time towards something.

It reminds me of PG's essay "The Bus Ticket Theory of Genius", which is like a hack on this idea. If you're obsessively interested in something, you're bound to spend an unreasonable amount of time on it.

Taking it even further, there's a Revisionist History episode on the song Hallelujah, whose original version took over two years to write. The two components of "experimental" genius: time and iteration.

http://www.paulgraham.com/genius.html

https://www.pushkin.fm/episode/hallelujah/


My team has been working on our app for 1.5 years now and of all the technical choices we've made, React Native is probably our favorite. It's taken us through the prototyping stage and all the way to production without many issues. Granted, we're a very small engineering team of 3.

Pros

- We haven't done performance tuning and haven't had any user complaints about performance (it's a multi-channel chat app)

- Most of the time, changes "just work" on both platforms

- Javascript :D

- Development velocity is great, especially w/ UI changes

Cons

- Wish we had better text input control

- You still need someone who knows about native app development on each platform

- Upgrading versions can cause breaking issues (this has gotten better)

- Lesser-used 3rd-party packages are often incomplete across platforms, so we maintain a fair number of patches

- Changes on one platform have the potential to break the other platform (so testing can require a lot of back and forth)

edit: formatting halp


I've tried making a chat application using Ionic and it's been tough.

The main issue is infinite scrolling by scrolling upward.

The user scrolls to the top, we make a note of the current top comment, load the next set of results, add them to the page.

At this point, the scrollbar is still at the top, so we need to manually scroll down to the previous top comment.

It can be a little jumpy. It certainly isn't smooth like WhatsApp etc.
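
For reference, here's the scroll-anchoring step described above in plain DOM terms (a minimal sketch; loadOlderMessages and the container element stand in for your own data layer):

  // Keep the viewport anchored on the same message while older items are
  // prepended above it. `container` is the scrollable element.
  async function prependOlderMessages(
    container: HTMLElement,
    loadOlderMessages: () => Promise<void>, // fetches and prepends DOM nodes
  ): Promise<void> {
    const prevScrollHeight = container.scrollHeight;
    const prevScrollTop = container.scrollTop;

    await loadOlderMessages(); // scrollHeight grows by the inserted content

    // Offset by however much content was inserted above the viewport,
    // so the previously visible message stays put.
    container.scrollTop =
      prevScrollTop + (container.scrollHeight - prevScrollHeight);
  }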

Does RN have a list view that deals with this issue out of the box?


We're building a chat app in RN. Last I checked, the RN components do not support the functionality that you mention out of the box.

We are using a fork of GiftedChat, which has generally been a positive but not stellar experience (https://github.com/FaridSafi/react-native-gifted-chat). I understand it includes some fairly clever (perhaps hacky?) and extensive changes on top of RN's components to mimic the interactions we generally expect in a chat UI. It's been performant for the most part but is quite opinionated.
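
For anyone evaluating it, the stock API looks roughly like this (a minimal sketch following the library's README example; the fork layers its changes on top of something like this):

  // Minimal stock usage of react-native-gifted-chat.
  import React, { useCallback, useState } from "react";
  import { GiftedChat, IMessage } from "react-native-gifted-chat";

  export function ChatScreen() {
    const [messages, setMessages] = useState<IMessage[]>([]);

    // GiftedChat.append prepends new messages (the list renders inverted).
    const onSend = useCallback((newMessages: IMessage[] = []) => {
      setMessages(prev => GiftedChat.append(prev, newMessages));
    }, []);

    return <GiftedChat messages={messages} onSend={onSend} user={{ _id: 1 }} />;
  }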

I would love to know if there's a better, more modular solution out there.


We're also using a fork of gifted chat; we pretty much use it for the layout measuring and the inverted scroll view. Hoping to move to FlatList at some point!


I still cannot wrap my head around the fact that there are people willingly choosing Javascript over Swift.

