Hacker News | paozac's comments

Very nice, but on an A4 sheet the last row is cut off and you need to manually shrink it


Mazda owner here, love the physical controls but using Google/Apple maps with the joystick is painful


Trailers give away the entire film these days. No surprises allowed


I've seen some movies knowing nothing about them by avoiding the trailer (this was much easier in the 1990s...). Movies seem to work better that way.

Though it can be jarring: e.g. Silence of the Lambs or Leaving Las Vegas.

The Onion had a good take on it:

https://theonion.com/wildly-popular-iron-man-trailer-to-be-a...


I saw “The Menu” and “Palm Springs” having never even heard of them. I would highly recommend both, especially if you know nothing about them.


Watch Society. Just watch it. Don't Google it!


I always do this, as far as possible.

I avoid trailers like the plague. When reading reviews I'll skip most of them; I only want to know the gist of the plot and the overall verdict.

I enjoy movies so much more this way. Sure, sometimes I end up watching some duds, but most of the time I'm really engrossed and I love the surprises.

If I watch a trailer, especially the modern 5-minute condensed versions, I find it takes away >90% of the excitement. Doesn't matter if the movie comes out next year, the trailer will come back to me and I will recall the spoiled plot points.


I try to see most movies that way.

You can pick movies by looking up the film on Wikipedia and jumping straight to "Critical response" without reading anything else.

(though I should have paid MUCH more attention with Megalopolis)


I heard once that this is because the creators of the trailers are separate entities from the movie studio. Their job is to sell the movie. They don't care if they have to spoil the whole movie to get you to buy a ticket to see it.


I think some movies are so crap that they've only got 2 min of good material, so that's what they put in the trailer.


The editors of the trailers should be editors for the movies themselves!


And then you have trailers like Cloud Atlas's, which is five minutes long and doesn't spoil a single thing about the film: https://www.youtube.com/watch?v=hWnAqFyaQ5s


That's pretty impressive.

Haven't seen the film, but read the book. I got what was being depicted (mostly), but yeah, it shows without revealing.

It helps that the story doesn't revolve strictly around action and combat, which many blockbusters do these days.


I can usually tell within the first third of a trailer whether I'd like to watch it. In those cases I don't finish the trailer. They give everything away.


It’s been like that for my whole adult life (20 years). When I used to go to theaters, I would wait outside until the trailers stopped.

Knowing nothing about what I’m about to watch is my favorite way.


One thing I always wished were possible is to see The World's End without any pre-knowledge. The trailer completely gave away the premise.


I saw that without any spoilers and it was one of the weirdest movie experiences I’ve ever had.


I have never seen it or the trailer. Thank you for the recommendation!


The worst are the trailers for really bad movies. I recall one where the trailer was literally the only good scenes in the movie.


Yeah. Sometimes I've wanted to check out a new film on Netflix. I watch the tiny trailer that runs in the app, while I figure out whether it is worth my time.

Trailer ends, and I know all I need to know about the film. The plot is known, the story is more or less obvious. Pick another film, repeat, same thing.

Result: do something else entirely, or watch comfort series like Star Trek, where it doesn't matter that I remember the plotlines.


I’ve always wondered why they didn’t use the Fate of Atlantis script instead of the awful Crystal Skull one


Same. It is arguably a paint-by-numbers Indy romp, which of course the two newer films are not.


I’ve often pondered this and I have concluded that there are a number of factors:

1. Licensing. While it might appear that the Indiana Jones IP is fully in the hands of Disney, it’s plausible that there are nuances behind the scenes. For example, Paramount owns the distribution rights to the first four films. That’s a different issue, but I mention it as an example of how rights can become fragmented over time over multiple contracts. I wouldn’t be at all surprised if the rights to Fate of Atlantis are more complex than we see from the outside. Having said that, I’ve no evidence of this.

2. It was a good game, but does that make for a good movie? FoA had some great game features: multiple paths, fun puzzles, classic adventure game exploration and interactivity, awesome pixel art - but none of these are useful to a movie.

3. The script was… OK. It doesn’t have any standout quotable lines like the first three films did. Its comedy is more slapstick than witty, and there are more than a few video-game in-jokes. It’s also highly derivative, written, I suspect, to make players think “This is cinematic! This is just like a real Indiana Jones movie!” To that end, it worked brilliantly. But in the 90s, games were still a sideshow compared to Hollywood, and were written in a parodic style, not yet having found their own space as a unique medium. To an extent, this is still true today.

4. An aging Harrison Ford is tough to reconcile with a story set in the 1930s. I expect this alone was a big driver in the studio wanting new scripts set in the 50s and 60s for the last two films.

5. Movies based on games have an abysmal track record, and I’m sure Hollywood producers get an allergic reaction whenever someone suggests another one.

And yet, despite all the above, FoA is still fantastic raw material for a movie. The plot, mythology, locations, set-pieces, and even the music, are all perfect for being reimagined on the big screen. Which leads me to the final reason why I think it hasn’t happened:

6. Nobody took games seriously in the 90s; Lucas was in control in the 00s for Crystal Skull and wanted his own story, not somebody else’s; Disney was distracted by Star Wars and Marvel in the 10s; and by the time we get to Dial of Destiny in the 20s, Disney had become so risk-averse that the idea of pulling ideas from a video game seems far outside what they are creatively capable of. Sadly, with Dial of Destiny bombing, the franchise is probably dead until the 30s.

Still, the fact that Disney lawyers shut down the recent fan-remastered version of the FoA game gives me hope that they still recognize how much value that IP has. They’ve strip-mined every other IP, so it’s possible they will eventually realize they've been digging in the wrong place, let go of Harrison Ford (in either live, de-aged, or posthumous generative form), and get their top men working on it.


He's been the most polarizing figure in Italian politics of the last 30 years. It will take decades to reach a balanced assessment of the guy. IMO future historians will not be kind to him.


You can find them for many companies here:

https://github.com/OpenTermsArchive/contrib-versions


I used to like Postman, when it was a simple browser extension. I basically used it as a curl gui. Now

> Postman is an API platform for building and using APIs. Postman simplifies each step of the API lifecycle and streamlines collaboration so you can create better APIs—faster.

I guess I’m no longer their target user. Back to curl/httpie.


I used to like Postman until it locked me out of my locally-stored data while I was offline (testing my own applications on localhost) for an extended period. I found out that it phones home constantly (this cannot be disabled) and locks you out if the check fails. The developers stating they had no intention of changing this for "security reasons" before closing the GitHub issue sealed the deal.


This sounds unbelievable but it seems to be true? https://github.com/postmanlabs/postman-app-support/issues/10...



I don’t see anything in this issue that I’d characterize as

> The developers stating they had no intention of changing this for "security reasons" before closing the Github issue sealed the deal.

The issue is still open with no response from the developers that mentions security reasons.


There have been many tickets opened (and closed) on this problem. Searching their issues for the keyword "offline" should bring up some of them. This one is just the latest.


It's in the issue in the sibling comment. The suggested workaround is to not sign in at all.


For further context: I'm not sure if this has changed, but at the time signing in seemed to be required (there's some discussion in the ticket about confusion in the UI around this, so in retrospect UX dark patterns were likely at play). However, while it may have been possible to make ad hoc requests without an account, it didn't seem possible to save request collections locally without signing in.


That is incorrect. You can save requests inside Collections in the Scratchpad and send requests without an account.


Not sure what I said that's incorrect?

The original issues were from 2015 & 2017 - Scratchpad was only added to the docs in Jun 2021 (before that it was an undocumented feature, for likely under 2 years I would guess).

Also, on the suitability of Scratchpad as a workaround for this bug, as quoted from the person who created the linked issue:

> and no, scratchpad is not a solution to this.

The same sentiment is echoed through many of the more recent closed issues created in Github on this topic.

Either way: my comment above was mainly about dark patterns, which makes the existence of a workaround (no matter how suitable) somewhat moot. Even if this issue gets fixed "properly", the attitude of their devs over this long a period has been more than enough to turn me off using their software.


Scratchpad was specifically added as a feature to let people use Postman in an offline mode, because otherwise people were confused about the distinction between offline/online modes ("Why is my content not showing up in Postman's web version?", for example). There are some features that just can't be developed with local storage as the only option.

An exceptionally large majority of users that give us feedback are pretty happy with the collaborative online features that we provide through Workspaces.

Unfortunately, a small but vocal minority insists that none of those things be developed, because they have built their own workarounds through patterns from decades ago (CLIs, editors, repositories). I just don't agree with the sentiment that progress towards making lives easier for others should be stopped to satisfy a narrow viewpoint. I also understand that it will lead to alternatives, but so far almost everything I have seen in the market has been a clone of our feature set - open source or closed source. I am happy to see people compete with new ideas.


> There are some features that just can't be developed with local storage as the only option

This is disingenuous. Firstly, no-one is asking for this "as the only option"; they're asking for it as an (exclusive) option. Secondly, there are no features being asked for here that can't be developed with local storage as an option - in fact, the default is for local storage to be the only option. In most apps, the approach is to add sync as an extra, not as the required default.

Scratchpad seems a perfect representation of the developers' motivations, actually: it demonstrates that the feature is possible, but it's deliberately implemented as a non-default side feature, without integration with the app's main workflows, to discourage use. So the devs can point to it as a "solution" while continuing to pervasively track the bulk of their userbase.


It is also buggy, a resource hog, and unstable. I use it on Ubuntu and macOS (M1), and I often have to kill it because it stops accepting any input or has eaten all the memory (and the CPU after that, when you click anything). Hoppscotch and others are better now. I guess they wanted way too much too fast (I did not check, but I suppose they got VC money?).


Can confirm. Unfortunately, Postman is too resource-hungry (on Ubuntu). Launching takes a while as well.

I have to admit I'm quite surprised that VS Code (which is also an Electron app) is relatively fast and resource-efficient. With a few applications open in my daily workflow, moderate resource consumption is becoming an important selling point for me.


Incidently, httpie is getting a GUI: https://github.com/httpie/desktop


And VC funding!


Oh, that's really unfortunate, I really like httpie. I wonder how long it'll take for them to end up trying to extract as much value from each user as possible...


HTTPie founder here. These are valid concerns. There are two parts to this: 1/ what happens with HTTPie for Terminal, and 2/ how HTTPie for Web & Desktop and the overall platform will look.

1/ HTTPie for Terminal will always be open-source and obviously free. The difference is that now we’re able to pay a talented developer to work on the project full-time (the recent 3.0 release is a result of that).

2/ We’re building a new platform with the same principles that made HTTPie for Terminal successful in mind: uncompromising simplicity, focus on productivity, and delightful user experience. We’re in the same space as Postman, but the idea is not to be anything like it. We’re striving to become what Linear is to Jira, Vercel to AWS, Figma to Adobe, etc. That is, to offer a much simpler and more focused product. Premium services for companies will be a natural extension of the single-player mode, and all incentives will be aligned in a way that doesn’t cannibalize the core experience.


> I wonder how long time it'll take for them to end up trying to extract as much value from each user as possible...

As long as it'll take for the investment documents to be signed.


Source? I really hope this isn't the case. Fucking capitalists kill everything.


A lot depends on the founders. There are plenty of VC-funded, OSS-originated businesses that don't suck. GitLab comes to mind. There's no reason to immediately assume httpie is going to be the Postman story repeated, although I agree that with VC funding it's more likely than without.


https://httpie.io/about says they've had a seed round already. Looked it up because that sounds… well it is what it is.


I'm building a desktop app that lets you query HTTP APIs but also databases and files. So definitely something you can use as a simple curl GUI. The big benefit of this tool though is that you can script and graph results as well.

Always happy for any feedback!

https://github.com/multiprocessio/datastation


Paw[0] is a pretty good native macOS option, at least for now. They were acquired sometime last year by RapidAPI[1], and since have released electron based versions of their app for Linux and Windows.

I’m really hoping they don’t go the 1Password route and kill their native macOS product to move everyone to the cross-platform one.

[0] https://paw.cloud/ [1] https://rapidapi.com/


No, we are not gonna kill the native macOS app! [0]

[0] I'm the lead developer for the macOS app :)


We own licenses for our developers too, and plan on buying more. Please, please, please, don't screw us over and change course later on. We really like Paw, and not being based on Electron is the major selling point :)


It seems unrealistic long term for any company to maintain one native app and an electron app across other platforms. Spotify did this for a while, but they eventually forced everyone onto the electron app. Something to keep in mind.


> It seems unrealistic long term for any company to maintain one native app and an electron app across other platforms

Not sure if it's more or less unrealistic to have one native app per platform.

> Spotify did this for a while, but they eventually forced everyone onto the electron app

I don't think (but someone correct me if I'm wrong, please) Spotify has ever been an Electron app. If I recall correctly they do indeed embed Chromium, but through their own custom binding (possibly via CEF), not via Electron.


Often companies don't have one native app per platform; often it's just Mac or something. They adopt Electron so everyone can use the app, and then suddenly the Mac version has an equal number of users to the Electron one (or fewer), and at that point justifying native development becomes difficult.


Really hoping the native app survives. Thanks for the great app! I’ve been a user for a long time and hope to remain so.


I wonder how much of that was driven by Silicon Valley VC culture. You won't get funding if you don't show growth and "innovation", which means catering to all kinds of users, resulting in a bloated product with lots of bells and whistles.


The UX has gotten noticeably worse for me. Maybe I'm using it wrong, but now you have to set up a project and name things before you can actually start issuing requests.

And I found the whole experience a bit confusing in terms of user flow


It really needs some love in the UX department. There are so many icons and toolbars it is hard to understand where to find things or why they are placed where they are.


New law: Any novel product with VC funding will inevitably grow a vestigial CMS.


I used to use Postman but now I prefer to just build my own scripts in Python. I use the requests library and can set things up however I want.


Same here. I am not sure what I would even use Postman for. I would essentially wait 3-5 minutes for Postman to initialize, be greeted with a dialogue box for an update or something, drop in a JSON file for the headers, and skim through the output.

But it takes seconds to get up and running with requests-html, and it can do anything Postman can do and more. I have no idea how people in organizations use Postman, though.
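
For the "drop a JSON blob of headers, fire a request, skim the output" workflow described above, even the standard library is enough (the URL and token below are made up for illustration; with requests it would be `requests.get(url, headers=headers)`):

```python
import json
import urllib.request

# Headers you might otherwise paste into Postman, as a JSON blob
# (values are invented for illustration).
headers = json.loads('{"Authorization": "Bearer TESTTOKEN", "Accept": "application/json"}')

req = urllib.request.Request("https://api.example.com/items", headers=headers)

# urllib.request.urlopen(req) would actually send it; here we just
# inspect what would go over the wire.
print(req.full_url)                      # https://api.example.com/items
print(req.get_header("Authorization"))   # Bearer TESTTOKEN
```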


> I am not sure what I would even use Postman for.

It's really handy for generating test suites to hand to people who don't necessarily have the skills to write Python / node / whatever code. Have worked at places where certain changes needed a Postman collection alongside for people to manually verify that it works.

(Also handy for non-coder people to make test suites, obvs.)

(Also handy as a quick-and-dirty "view this data via the API" when you don't yet have a web UI etc.)


I've worked in a place where test suites started in postman because of the lack of skill in QA.

For us, the fact that you end up writing "code" in Postman meant it had a learning curve anyway, so it was a really short-sighted win.

For everyone smart enough to write Postman code, especially code that leverages the scripts and stores variables, a simple test project set up by a dev is going to be very worthwhile. Postman doesn't have a linter or a compiler, and it doesn't enable easy viewing of changes in source control because everything is one big JSON file.


Same story here. Postman just got too feature-rich for my blood.

curl + bloomRPC + graphiQL covers all my bases nowadays.


The latest updates from httpie have an Insomnia-type REST client workspace thing.

https://httpie.io/product


This is invite-only software; am I missing something? I couldn't use it right away, I was asked to join a waitlist.


HTTPie for Web & Desktop is in private beta. We’re shipping updates weekly and inviting people from the waitlist every day. As soon as we’ve tackled the few remaining things on our roadmap and polished some rough edges, it’ll become publicly available.


> I basically used it as a curl gui.

I just use the browser's Network tab for that nowadays. CORS can be trouble at times, but that can be avoided with a few tweaks.


I think the killer feature for these separate tools is usually that you can easily do a right click -> "copy as curl" in the network inspector, then import it in Postman/Paw and then tweak parameters / add headers there. This is not really possible in the browser network tools.


It seems pretty easy with Firefox - there's "Edit and resend" in the context menu of every request.


And then you accidentally refresh or close the tab and everything is gone. I usually use specific requests over many days if I'm reverse engineering something so having these available, sorted in folders for me is important. Of course for other uses cases it might be fine to have them live in the browser.


See above. On MS Edge (Chromium) you can enable saving the requests and even saved environments.


I take that curl command to https://curlconverter.com

And get Python that I can start iterating on. They have lots of languages.

I used to use Postman, but with plain code it's so much easier to see what's happening vs Postman, imo.


On Microsoft Edge (Chromium) you can enable an "edit and resend" feature, save the requests to "Collections", and create request environments.


I wonder if such a feature could be added to Chrome/Firefox with an extension.


I’ve found Hurl to be quite usable.

https://hurl.dev/


I found SoapUI when I had to develop some SOAP services, but these days it also does REST etc just fine.

For someone like me who just does this occasionally I found it rather useful.

[1]: https://www.soapui.org


I just use the HTTP Client from IntelliJ/Rider. It's text only, you can use variables (for Auth for example), and I can copy/paste queries from Fiddler Classic/my browser or to colleagues.


If you're on macOS, Auxl (https://auxl.io) is another option to try. Support for gRPC should be coming soon. Disclosure: I am the author.


Try Insomnia or Hoppscotch.


How is Insomnia any different? It's basically an OSS carbon copy of Postman.


I switched to Insomnia about a year ago for two main reasons:

- Didn't choke when having ~50 request 'tabs' open

- Didn't try to sell me shit

Granted, Postman had quite a lot more tools in its box for scripting, testing, sharing etc. but I didn't need those.

Insomnia has got a bit fatter since then, but it remains more responsive than Postman was.


You asked how it's different then stated how it's different.

Because it's OSS they didn't feel the need to bog down a perfectly functional product to drive a valuation up.

It's a carbon copy of what Postman started as.


That's what I want. A local-only Postman without too many features or configuration overhead. Insomnia is almost perfect for that.



> when it was a simple browser extension

You can download packages of extensions. I would recommend that you do it for the ones you love.

But the browser APIs are constantly changing, forcing you to keep running in order not to fall behind.


50TB is not so big these days. I read that in 2008 (!) Yahoo had a 2+ PB PG database. What is the largest you know of, 14 years later?


50TB is big. Bigger is possible I'm sure, but I'd guess 99.something% of all PG databases are less than 50TB.

If someone here commented they had a 2PB database, I guarantee someone else here would be like "pfft, that's not big"...


The OP could have better said that 50TB databases are common these days, when a single metal or 24xl I3en or I4* instance on AWS can hold 60TB raw.


it's more than big enough to cause big problems / risk days of downtime to change, yea. 50GB is not big. 50TB is at least touching big - you can do it on one physical machine if needed, but it's the sort of scale that benefits from bigger-system architecture. 50PB would be world-class big, hitting exciting new problems every time they do something.


With 50TB, and if you were doing a full text search, wouldn't the entirety of the index have to be held in memory?


No. Full-text indexes exist.


You can also do an incremental/streaming search. Lots of ways to avoid loading it all into memory at once, yeah.
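
The streaming idea in miniature: a generator that yields matches as it goes, never materializing the input (the data here is synthetic; nothing is specific to any search engine):

```python
def grep_stream(lines, needle):
    """Yield matching lines one at a time; memory use is O(1) in input size."""
    for line in lines:
        if needle in line:
            yield line

# Works on any iterable: a file object, a DB cursor, a paginated API...
rows = (f"row {i}: {'hit' if i % 1000 == 999 else 'miss'}" for i in range(10_000))
first = next(grep_stream(rows, "hit"))
print(first)  # row 999: hit -- found after reading only 1000 rows
```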


Around 2005 I took a tour of [a well-known government organization] and they were bragging about several-PB-sized databases at the time. Interestingly, there were a TON of server racks there in a bomb-proof building with tons of security, and they were all IBM servers (a supercomputer maybe?), if I remember correctly. Also, there was one small server rack that was painted differently from the rest (it looked like something made in-house), and we asked what it was, and the tour guide (a PhD computer scientist) said that technically it didn't exist and he couldn't talk about it even though it was super cool. Now that I know what they were doing around that time (and probably still today) I am kinda scared at the implications of that tour guide's statement and what that one tiny rack was for. I'm glad I never went to work in their organization, since that tour was meant to recruit some of us a few years down the road.


This comment contains no information other than an ego boost for yourself, AFAICT.


I need every ego boost I can get these days, friend. Either way, I was intending to tell a story directly relevant to the OP about how there were very large databases even back then. Interestingly, the same size databases are probably run on much less hardware today.


Was that a three letter US government agency?


How are people dealing with databases this large? At work we have a MySQL db with a table that has 130M records in it, and a count(*) on that table takes 100 seconds. Anything but a simple lookup by id is almost unworkable. I assumed this was normal because it's too big. But am I missing something here? Are SQL databases capable of actually working fast at 50TB?


count(*) is always going to be slow. They don't store the number of live tuples, just an estimate, so it's a full table scan. The secret is to use indexes to get down to the small slice you care about. If you're filtering on 3 columns, the goal is to get the index to wipe out at least half the results you don't care about, and so on and so forth.

A 130M-record table with no indexes is going to be crazy slow. Although if all you need are primary key lookups, then that's the way to go.


Even at the 130M rows range, you should still be able to take advantage of indexes for fast queries beyond just the primary key. It's been a while since I used mysql, but around 2010 I was working on mysql 5.something and we had several >100M row tables that could still serve indexed queries very quickly (sub ms, or couple ms, iirc). If you are not able to do this, I suggest looking into mysql config and adding/tuning indexes. But yes count(*) will be slow, I'm not aware of good workarounds for that other than caching or using table stats with postgres (if you don't need perfect accuracy) - not sure if mysql supports similar.


It depends on the queries you run. In Postgres we use stuff like materialized views, partial indexes, and HyperLogLog, and if you are using CitusDB (Postgres for adults) you can even have columnar tables to accelerate OLAP stuff.


Security and incident response systems ingesting log files from other systems can get big, add in ‘must store for $x years’ compliance fuzz and you might hit some big numbers


Was it a single server?


Not sure about this, I wouldn't like having to look up the FK name every time or hope it was named following the convention.

The first thing (among many others) that I would change in SQL is the position of the SELECT clause:

FROM .. JOIN .. SELECT .. WHERE ..

instead of

SELECT .. FROM .. JOIN .. WHERE ..

That would make the construction of many queries more natural.


Never thought about the order, but now that you mention it, I realize I always write

    SELECT
    FROM
and start writing the FROM clause first.

So if we could rewrite history, I agree the order you suggest would make more sense, but it’s probably unrealistic to change it, except for entirely new SQL inspired languages.

To comment on having to look up foreign keys: the idea I had in mind is to allow changing the default formatting of foreign key names, so you could figure out the name, except in special cases such as two foreign keys referencing the same table. In such cases you should explicitly name the foreign keys.


Following the granularity gradient

FROM .. JOIN .. WHERE .. SELECT

tables -> joined tables -> rows -> columns

I think they knew that but decided they wanted DBAs to think about what they wanted to end up with before they began writing code, so they moved the last step to the first step. I can get behind that from a pedagogical perspective.

However today, relational databases are far too ubiquitous to always get the level of serious thought and considerations they were once afforded, and are regularly being used by (perish the thought) non-DBAs.

At this point it is muscle memory to start with:

   select count(*)
     from ..
     join .. on ..
 
then comment out the `--count(*)` and build up the output after the body works because I may not know what is available to select before isolating it.


Those were the times when "natural-language-like" was considered good and useful for non-experts. It's one of the reasons why the SQL syntax is so complex and, I believe, the reason why IBM went with that order.


This is effectively what CTE-style syntax gives you. I always find CTEs more intuitive for intermediate-to-advanced queries for exactly this reason. They’re also much easier for another reader to come in later and deduce what the query is doing.
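
A small sketch of that reading order with a CTE (SQLite here purely because it's easy to run; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cities (name TEXT, country TEXT);
    INSERT INTO cities VALUES ('Rome', 'IT'), ('Milan', 'IT'), ('Lyon', 'FR');
""")

# The source and filter are declared first; the trailing SELECT is just
# the projection -- effectively FROM .. WHERE .. SELECT.
rows = conn.execute("""
    WITH italian AS (
        SELECT name FROM cities WHERE country = 'IT'
    )
    SELECT name FROM italian ORDER BY name
""").fetchall()

print([r[0] for r in rows])  # ['Milan', 'Rome']
```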


CTEs are indeed nice, but can also make your queries substantially slower, at least in Postgres.


That's mainly true because you don't have the benefit of an index when you join on a CTE, isn't it?


It's slow even without indexes. In other cases, where the CTE is used multiple times in the same query, it can be faster.


A JOIN is simply a special predicate.

    FROM A ... FROM B ... SELECT ... OUTER JOIN A.Foo = B.Bar ... WHERE ...
That also happens to be incredibly compatible with code completion.


The issue here is that FROM, WHERE, etc. are all optional, whereas the operation (SELECT in this case) is not.

If it were a proposal for a SQL21 standard to offer this as an optional method for query processing, I'd be all for it.

However, the idea of "non-optional followed by optional" came from an era where that sort of thing mattered and it made sense.


This is a good use case for an AST-based query compiler which would allow construction of clauses in arbitrary order. Using HoneySQL in Clojure, I frequently would write:

  (-> (from [:cities :c])
      (join [:population :p] [:= :c.name :p.city-name])
      (select :c.name :c.pizza-rank :p.population)
      (order-by [:c.pizza-rank :asc]))
It's handy for a lot of reasons, but unless I had mostly static queries (changing just some where-clause params), I would always seek out an AST library rather than attempt string building for a SQL use case.


I'm sure they've put some thought in the pricing strategy, like with the infamous $999 monitor stand. If you think $19 is ridiculous for a cloth (I do) then you aren't the target buyer.

