Looks similar to RealtorStats.org


Good accelerators offer this to their cohorts as a service. Techstars, which I was a part of, has a login-required site where founders post their experiences. It's pretty useful for saving time with VCs that won't participate in your round because of sector, size, or a competing portco.


You’ll have to deal with lambda cold starts if you want it to be performant:

> When the Lambda service receives a request to run a function via the Lambda API, the service first prepares an execution environment. During this step, the service downloads the code for the function, which is stored in an internal Amazon S3 bucket (or in Amazon Elastic Container Registry if the function uses container packaging). It then creates an environment with the memory, runtime, and configuration specified. Once complete, Lambda runs any initialization code outside of the event handler before finally running the handler code.

https://aws.amazon.com/blogs/compute/operating-lambda-perfor...


It's not entirely accurate that Lambda pulls container images from ECR at start-up time. Here's me talking about what happens behind the scenes (which, in the real world, often makes things orders of magnitude faster than a full container pull): https://www.youtube.com/watch?v=A-7j0QlGwFk

But your broader point is correct. Cold starts are a challenge, but they're one that the team is constantly working on and improving. You can also help reduce cold-start time by picking languages without heavy VMs (Go, Rust, etc), by reducing work done in 'static' code, and by minimizing the size of your container image. All those things will get less important over time, but they can all have a huge impact on cold starts now.
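
For illustration, one way to read "reducing work done in 'static' code" in a Node handler is to defer setup that a given invocation might not need until first use, so the init phase does less (a rough sketch; the bucket name and event shape are made up):

    // Hypothetical sketch: lazy client creation keeps module-scope (init) work small.
    import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

    let s3: S3Client | undefined; // created on first use, not at init

    export const handler = async (event: { key: string }) => {
      s3 ??= new S3Client({}); // lazy init keeps 'static' code cheap
      const res = await s3.send(
        new GetObjectCommand({ Bucket: "my-bucket", Key: event.key }),
      );
      return { statusCode: 200, length: res.ContentLength };
    };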

Another option is Lambda Provisioned concurrency, which allows you to pay a small amount to control how many sandboxes Lambda keeps warm on your behalf: https://docs.aws.amazon.com/lambda/latest/dg/provisioned-con...
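
If it helps, here's roughly what provisioned concurrency looks like in CDK terms; this is an illustrative sketch only, with the asset path, runtime, and count as placeholders:

    import { Stack, StackProps } from "aws-cdk-lib";
    import * as lambda from "aws-cdk-lib/aws-lambda";
    import { Construct } from "constructs";

    class WarmStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        const fn = new lambda.Function(this, "Fn", {
          runtime: lambda.Runtime.NODEJS_18_X,
          handler: "index.handler",
          code: lambda.Code.fromAsset("lambda"), // placeholder asset path
        });

        // Publish a version and keep 2 execution environments initialized for it.
        new lambda.Alias(this, "Live", {
          aliasName: "live",
          version: fn.currentVersion,
          provisionedConcurrentExecutions: 2,
        });
      }
    }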


Pardon the ignorance, but are Lambda containers considered to be single-threaded? Or can they serve requests in parallel?

If I had a Spring Java (well, Kotlin) app that processes stuff off SQS (large startup time but potentially very high parallelism), would you recommend running ECS containers and scaling them up based on SQS back-pressure? Or would you package them up as Lambdas with provisioned capacity? Throughput will be fairly consistent (never zero) and occasionally bursty.


I would not use Spring, or Java for that matter, for lambdas, speaking from experience.

"Lambda containers" is a bit of a misnomer, as you can have multiple instances of a function run on the same container, it's just that initial startup time once the container shuts down that is slow (which can be somewhat avoided by a "warming" function set to trigger on a cron).

I would definitely go with containers if your intention is to use Spring. ECS containers can autoscale just the same as lambdas.

There's some work being done to package Java code to run more efficiently in serverless computing environments, but IIRC, it's not there yet.


Thanks! I wasn't planning it, but can't hurt to ask.

When I looked, the Lambda API seemed uncomplicated to implement (I saw an example somewhere), and it felt like you could just write a few controllers and gain the ability to run a subset of functionality in Lambda, especially if your app could be kept warm.

(to your cron comment, I thought that the reserved capacity would mean the container would be forcibly kept warm?)


Provisioned concurrency is nice, but it can get pricey, especially in an autoscaling scenario. It moves you from a pay-per-usage situation to an hourly fee + usage model. I would wait until your requirements show you absolutely need it. For most use cases, you will either have enough traffic to keep the lambda warm, or can incur the cost of the cold start. Warming functions did the trick for us. If you think about it, provisioned concurrency is paying for managed warmers.


Spring is one thing, Java is really another. One can use Java without reflection, and then cold starts are greatly reduced. Additionally, there's GraalVM, an optimized VM that should be even faster. On top of that, if reflection is not used, these days one can compile Java to a native image, which has none of these problems.


When you say fast though, you really are talking in comparison to other methods of using Java on Lambda. But compared to using something like Go, they are all slow.


Each container handles requests serially. This doesn’t preclude you from spawning multiple threads in Lambda to do background work though.


Serially, but up to ten requests in a single batch

> By default, Lambda polls up to 10 messages in your queue at once and sends that batch to your function.

From https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
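
To make that concrete: a single invocation receives the whole batch in one event, and the handler decides how to work through it. A rough TypeScript sketch (processing the records concurrently is just one option, not a requirement):

    import type { SQSEvent } from "aws-lambda";

    export const handler = async (event: SQSEvent) => {
      // Up to batchSize (10 by default) records arrive together in one invoke.
      await Promise.all(
        event.Records.map(async (record) => {
          const body = JSON.parse(record.body);
          console.log("processing message", record.messageId, body);
        }),
      );
    };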


I'm not an expert in this area, but have you all considered using CRIU[0] (checkpoint/restore in userspace) for container-based Lambdas, to allow users to snapshot their containers after most of a language's VM (like Python) has performed its startup? Do you think this would reduce startup times?

0. https://criu.org/Docker


That's a good question!

Accelerating cold starts with checkpoint and restore is a good idea. There's been a lot of research in academia around it, and some progress in industry too. It's one of those things, though, that works really well for specific use-cases or at small scale, but takes a lot of work to generalize and scale up.

For example, one challenge is making sure that random number generators (RNGs) don't ever return the same values after cloning (because that completely breaks GCM mode, for example). More details here: https://arxiv.org/abs/2102.12892

As for CRIU specifically, it turned out not to be the right fit for Lambda, because Lambda lets you create multiple processes, interact with the OS in various ways, store local state, and other things that CRIU doesn't model in the way we needed. It's cool stuff, though, and likely a good fit for other use-cases.


They have a feature called "provisioned concurrency" where basically one "instance" of your lambda (or however many you want to configure) stays running warm, so that it can handle requests quickly.

I know it defeats the conceptual purpose of serverless, but it's a nice workaround while cloud platforms work on mitigating the cold start problem.


That'll also cost you $$$$ and takes any provisioned lambda out of the free tier. Also note that only your specified number of instances will stay warm, meaning if your lambda needs to scale up, you risk slow cold starts on additional instances outside of the number you provisioned. You could specify reserved concurrency (limiting the number of instances that run), but that also costs money and will eat into your quota.

Using containers for lambda is generally a bad idea for anything TypeScript/JavaScript that handles a realtime request: you just can't beat the speed of a single JavaScript file (compiled, in the case of TS). AWS CDK now ships with the NodejsFunction construct as well, which makes generating those a breeze with esbuild.
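
For anyone who hasn't seen it, a minimal sketch of that construct (the entry path and bundling options are placeholders):

    import { Stack } from "aws-cdk-lib";
    import { Runtime } from "aws-cdk-lib/aws-lambda";
    import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";

    declare const stack: Stack;

    new NodejsFunction(stack, "ApiHandler", {
      entry: "src/handlers/api.ts", // bundled by esbuild into a single JS file
      handler: "handler",
      runtime: Runtime.NODEJS_18_X,
      bundling: { minify: true, sourceMap: true },
    });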


It costs me like $3 a month to get benefit from it. For what it's worth, I don't use it for synchronous web requests.


I've had some pretty good luck slimming things down as well. That's usually a win-win even for non-lambda cases (trying things like docker-slim, or switching stuff that needs a quick response to Go).

That said, the $2-5/month is fine as well for some cases.


It’s nice to have that dial. Running lambda with provisioned concurrency is still a very managed experience: much different than running a container cluster.


If cold starts are at all an issue for whatever use-case, you can just do a warming job like we do (in our case it's built into Ruby on Jets). We find invoking every 30 seconds is enough to never have a cold start. It's still quite cheap as well. The lambda portion of our bill (with tons of platform usage) is still incredibly low / low double digits.

Just doing a warming job with no other usage falls well within free tier usage, I can confirm.


This is definitely an issue, especially with infrequently accessed functions, but I've seen cold start issues regardless. I assume some scaling events will cause cold starts (measured in seconds).

There's a good argument to go with packaged code instead of containers if you can manage the development complication and versioning (cold starts measured in milliseconds).


My team owns a Node 14 JS lambda application that is completely serverless. We're focused on keeping our lambdas small with single responsibilities, and we leverage lambda layers for anything common across multiple lambdas. Cold starts are a concern, but they're negligible (< 100 ms tops) and unnoticed by our client apps. We host all of our static web and JS assets via CloudFront so they load quickly. If a user happened to visit our site when a lambda incurred a cold start, it's not perceptible. We were much more worried initially about cold starts than it turned out we needed to be. Keeping lambdas small, picking a good language for the job, and leveraging lambda layers help minimize this a lot.
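
As a rough sketch of that layer setup (the paths, names, and layout here are invented for illustration):

    import { Stack } from "aws-cdk-lib";
    import * as lambda from "aws-cdk-lib/aws-lambda";

    declare const stack: Stack;

    // Shared dependencies/utilities live in one layer...
    const common = new lambda.LayerVersion(stack, "CommonLayer", {
      code: lambda.Code.fromAsset("layers/common"), // e.g. nodejs/node_modules/...
      compatibleRuntimes: [lambda.Runtime.NODEJS_14_X],
    });

    // ...so each small, single-responsibility function stays tiny.
    new lambda.Function(stack, "SmallHandler", {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: "index.handler",
      code: lambda.Code.fromAsset("src/small-handler"),
      layers: [common],
    });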


https://www.realtorstats.org

Organizes data from real estate sales to help find an agent that will maximize your selling price. E.g., who will beat the Zestimate in Echo Park? I started it to help me find an agent to sell my house in LA and then decided to expand it across Los Angeles-area neighbourhoods.

It also got me using Next.js and deploying on Vercel.


This format was discussed in an HN front-page post just this week:

> This alphabet, 0123456789ABCDEFGHJKMNPQRSTVWXYZ, is Douglas Crockford's Base32, chosen for human readability and being able to call it out over a phone if required.

https://news.ycombinator.com/item?id=29794186
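
For a feel of the alphabet in practice, a tiny illustrative encoder (data symbols only, no check symbol):

    // Crockford's Base32 alphabet: no I, L, O, or U.
    const ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

    function encodeCrockford(n: bigint): string {
      if (n === 0n) return "0";
      let out = "";
      while (n > 0n) {
        out = ALPHABET[Number(n % 32n)] + out;
        n /= 32n;
      }
      return out;
    }

    console.log(encodeCrockford(1234567890n)); // "14SC0PJ"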


People in my industry in my country specifically avoid using B and D together, as they sound too similar over the phone.

Also, 2 and Z can look similar in writing.

However, it is nice not to see 0 and O, or 1, I, and l, in the same string.


F and S sound similar over the phone, at least on POTS landlines, as they don't carry the higher frequencies (> 4 kHz) that distinguish the S from the F. Note that cat names tend to have S sounds.

POTS (plain old telephone service) is restricted to a narrow frequency range of 300–3,300 Hz, called the voiceband, which is much less than the human hearing range of 20–20,000 Hz [from https://en.wikipedia.org/wiki/Plain_old_telephone_service ]


Anybody who has to relay things like API or CD keys over a POTS line on a regular basis quickly learns the NATO phonetic alphabet.


If you're worried about clarity over the phone, you should look into the NATO phonetic alphabet: https://en.wikipedia.org/wiki/NATO_phonetic_alphabet


I prefer to use Aeon, Bdellium, Czar, Djinn, Eye, etc.


The bomb defusal scene in Archer was an absolute classic for this. https://youtu.be/_4jxLxZrMfs


> Djinn

Fun fact: dzs counts as a single letter in Hungarian (e.g. in alphabetical ordering).


Quite a challenge for a non-English crowd.


You still have to know that 0 is 0 and not O, and that 1 is 1 and not I or l.


But if a mistake is made, and you wrote down L instead of 1 and sent it to me in an e-mail, then I, knowing that it is Crockford Base32, would easily deduce what mistake was made.


Right, I didn't realize the decoder is specified to be lenient in that way, so the confounded characters are actually equivalent in the encoding.
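
A minimal sketch of that leniency (per Crockford's spec, O/o decode as 0, I/i/L/l decode as 1, and hyphens are ignored):

    // Normalize a transcribed string back to canonical Crockford Base32 symbols.
    function normalizeCrockford(s: string): string {
      return s
        .toUpperCase()
        .replace(/-/g, "")      // hyphens are allowed for readability and dropped
        .replace(/O/g, "0")     // O reads back as zero
        .replace(/[IL]/g, "1"); // I and L read back as one
    }

    console.log(normalizeCrockford("o1l-5")); // "0115"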


Yeah, when he lays out the arguments for it in the book, you can clearly see why it makes a huge amount of sense: the usability, the performance, the value of a checksum, etc.


Looking for that quote, I can't find it on that page.


Lots of realtors do it in Los Angeles; casual browsing of neighbourhood sales data on sites like RealtorStats shows how common it is.


This is why, when setting guidelines around particular behavior, standards that require some judgement are often better than rules that are to be followed blindly without any discretion.

Compare the rule around speed limits, "don't drive above 70 mph" (no judgement; it could be too slow or too fast in a given situation), with the one often observed and followed in reality: drive at a reasonable speed, roughly what others are driving at (use your judgement about what's safe).


I was in a similar situation, though Australian. I got a letter from an education/certification expert that said my past experience as a dev plus my education (I also had a BA) was equivalent to a BS. I recommend you get an attorney/Peter to help you.


A bachelor's degree in a related field, based on an evaluation of education and experience, can work in the H-1B context but is not acceptable in the TN context, where a bachelor's degree is required.


Can a master's in the field replace the bachelor's? E.g., I have a bachelor's in business and am going through a master's in CS; am I eligible for a TN visa as a Software Engineer or Computer Systems Analyst?


I'm a founder that went through Techstars, not YC, so maybe not exactly similar, but accelerators want you to wait until Demo Day to have any investor talks, while VCs want to get in early. Much like experienced used-goods resellers will attempt to pick through yard sales early: no competition leads to investor-friendlier terms. So an investor telling YC candidates not to wait to have these conversations isn't exactly news. (Question whether waiting makes sense, but these are the dynamics.)


It's not binary. My impression is that most YC companies aren't raising priced rounds at/around Demo Day; these are syndicated seed rounds. They mostly don't have "lead investors". They can happen incrementally over the course of a month.

I'm sure there are companies that have all their round capacity snapped up by single firms, but I don't think that's the norm; I think a bunch of random angels is much closer to the norm.

Which suggests that the "yard sale" phenomenon you're talking about here is mostly not a real thing.


Asking for relief in a vague generalized form is the general practice in larger complex cases where the relief the claimant would be entitled to depends on the outcome of and findings associated with various claims made. As these become known, parties will be given an opportunity to be more specific about the relief they think is appropriate.

