prayze's comments

I can see why, it's very good.


Working on it! Though the concept is new and fledgling: free non-profit arts publications that give exposure to featured emerging artists like your example. Donor-funded, so no money is extracted from the artists themselves.


It took so long to get an invite code I unfortunately lost my interest.


Did this suddenly get changed? Nothing but "# ,: # ,' | # / : # --' / # \/ />/ # /" is shown now.


It's just your browser's HTML parser. Line 6:

  #                         / <//_\
The <//_\ is being interpreted as a malformed closing tag, which (per the HTML5 parsing algorithm published by WHATWG) gets turned into a "bogus comment" that runs until the next >. The file doesn't contain any > past this point, so everything after it is swallowed. That leaves the uncommented contents of lines 1–6:

  #                               ,:
  #                             ,' |
  #                            /   :
  #                         --'   /
  #                         \/ />/
  #                         /
Or, with whitespace collapsed:

  # ,: # ,' | # / : # --' / # \/ />/ # /
Which should be exactly what you observe.
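
If you want to poke at it yourself, here's a quick way to see the same behaviour from the console (hypothetical snippet, not the actual file):

  // Reproduce the bogus-comment behaviour with DOMParser, which mirrors
  // what happens when the robots.txt is rendered as text/html.
  const text = [
    "#   ,:",
    "#   / <//_\\  anything after this never makes it into the DOM,",
    "#   because the rest of the input has no closing angle bracket",
  ].join("\n");

  const doc = new DOMParser().parseFromString(text, "text/html");
  console.log(doc.body.textContent);
  // logs "#   ,:\n#   / " because everything from "<//" onward
  // became a single comment node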

Ref: https://html.spec.whatwg.org/multipage/parsing.html https://developer.mozilla.org/en-US/docs/Web/CSS/white-space...


Weird. I think it did change. Google cache shows a 2229-line file: https://webcache.googleusercontent.com/search?q=cache%3Ahttp...


Seems it might be looking at the referrer. Loading https://www.shopify.com/robots.txt from clicking the link shows the weird line while opening it in a private browser window shows the right one.


For some reason, "view source" gets the right list. Maybe a referer issue like someone else said.


I have to echo the same sentiments. I swore off them a year ago after I provided support with step-by-step screenshots of how to reproduce my exact error, and even the error itself. Support told me it wasn't an issue and closed it.


A friend and coworker of mine went to work for Shopify in support. He was the engine behind two amazing support teams at the companies we worked at, and he was so excited and optimistic to take on that task at Shopify.

He’s so over it now. He hasn’t been affected by layoffs but deeply wishes he had been, because it would give him the financial mobility to make a change. He has said that the support they offer has deteriorated dramatically since he started, and I’ve wondered how much of that is his own dissatisfaction vs reality… But what I’m reading suggests he’s probably on point. It seems pretty bad.


I get a 404 from this link.



I've always been curious about this. What's the best practice for loading a large JSON file for large sets of search results? I believe when working with lunr in the past, I ended up making large network requests to load the entire JSON file at once. What's the proper way to deal with this?


Once your website reaches a certain size, the JSON will be too big to load. Then you'll have to offload the search request to a server, either self-hosted or a service like Algolia.


You can probably push that "certain size" a long way down the road if you need to (at least in this specific client-side text search case, rather than in a generic "how do I serve large JSON files" way).

If you tweak the search box so it doesn't do anything until you've typed at least 2 or 3 letters, you could then serve pre-generated JSON files that only contain the matches for that prefix. No need for any of the JSON payload for words/phrases starting with aa..rt if someone's typed "ru" into the search box.

That means you'd have 676 distinct json files, but you'd only ever load the one you need...
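
A rough sketch of the client side of that, assuming a build step has already written one hypothetical /search/<prefix>.json file per two-letter prefix:

  // Hypothetical loader for prefix-partitioned search indexes.
  type SearchEntry = { term: string; url: string; title: string };

  const cache = new Map<string, Promise<SearchEntry[]>>();

  function loadPrefix(query: string): Promise<SearchEntry[]> {
    const key = query.slice(0, 2).toLowerCase();   // "ru" -> /search/ru.json
    if (!cache.has(key)) {
      cache.set(key, fetch(`/search/${key}.json`).then(r => (r.ok ? r.json() : [])));
    }
    return cache.get(key)!;
  }

  async function search(query: string): Promise<SearchEntry[]> {
    if (query.length < 2) return [];               // do nothing until 2+ letters
    const entries = await loadPrefix(query);
    return entries.filter(e => e.term.startsWith(query.toLowerCase()));
  }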


But that requires a network request after typing to get results, which is about the same user experience as a search bar that hits some search API.


> requires a network request

Seems to me that often, though not always, this network request would happen whilst the user is still typing, say, busy typing characters 3, 4 and 5, so the request wouldn't be noticeable to the human.

And if they type more characters, or backspace-delete to fix a typo, no new network request is required.

And the same goes for typing a second word.

I'm guessing that in like 90% of cases it'll seem as if there was never a network request at all.


True, but it'd still allow your site to be a collection of static files instead of needing an executable running on a backend somewhere.


Push the corpus into SQLite; it has built-in FTS engines[^1]. Then serve it with anything. Unfortunately this needs server-side code, but it's something like 30 lines of PHP.

[^1]: https://www.sqlite.org/fts5.html
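
Not PHP, but a sketch of the same idea in TypeScript/Node, assuming better-sqlite3, an FTS5-enabled SQLite build, and a made-up schema like CREATE VIRTUAL TABLE pages USING fts5(title, body, url):

  // Minimal full-text search endpoint backed by SQLite FTS5.
  import Database from "better-sqlite3";
  import http from "node:http";

  const db = new Database("site.db", { readonly: true });
  const stmt = db.prepare(
    "SELECT url, title, snippet(pages, 1, '<b>', '</b>', '…', 12) AS excerpt " +
    "FROM pages WHERE pages MATCH ? ORDER BY rank LIMIT 20"
  );

  http.createServer((req, res) => {
    const q = new URL(req.url ?? "/", "http://localhost").searchParams.get("q") ?? "";
    let rows: unknown[] = [];
    try {
      if (q) rows = stmt.all(q);   // FTS5 MATCH syntax can throw on odd input
    } catch { /* treat a malformed query as "no results" */ }
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify(rows));
  }).listen(8080);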


You can do SQLite in the browser, but it’ll have to download the entire DB file instead of only opening the pages it needs (because the naive JS port can’t convert page requests into range requests).


It should be possible to support loading only the required pages in the browser with SQLite compiled to WASM along with a custom VFS implementation. Here’s a project[1] that does something similar (selectively loading the SQLite DB on demand), albeit with the SQLite file in a torrent.

[1]: https://github.com/bittorrent/sqltorrent
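
The core trick is translating the VFS's page reads into HTTP range requests against a statically hosted DB file. A very rough sketch of just that piece (hypothetical helper; real projects add caching, read-ahead and the WASM glue):

  // Fetch only the bytes a SQLite page read asks for, via an HTTP Range request
  // against a statically hosted .sqlite file. The server must support ranges.
  async function readChunk(url: string, offset: number, length: number): Promise<Uint8Array> {
    const res = await fetch(url, {
      headers: { Range: `bytes=${offset}-${offset + length - 1}` },
    });
    if (res.status !== 206) throw new Error("server did not honour the Range header");
    return new Uint8Array(await res.arrayBuffer());
  }

  // e.g. the SQLite header is the first 100 bytes; the page size sits at offset 16:
  // const header = await readChunk("/data/site.sqlite", 0, 100);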


Wow this is great, thank you for the tip! My site uses SQLite3 for storage so this is perfect.

I'm continually amazed at how featureful SQLite is.


Has anyone tried using IndexedDB instead of one big JSON load, with a split-JSON approach? I mean a bunch of static JSON files that are only fetched if they're not already in IndexedDB... like patches?

https://gist.github.com/inexorabletash/a279f03ab5610817c0540...
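
Haven't seen it packaged up, but the IndexedDB part is small. A sketch (hypothetical database/store names) that only fetches a static JSON chunk when it isn't cached yet:

  // Cache static JSON chunks in IndexedDB; only hit the network on a miss.
  // Cache invalidation / versioning (the "patches" part) is left out here.
  function openDb(): Promise<IDBDatabase> {
    return new Promise((resolve, reject) => {
      const req = indexedDB.open("search-cache", 1);
      req.onupgradeneeded = () => req.result.createObjectStore("chunks");
      req.onsuccess = () => resolve(req.result);
      req.onerror = () => reject(req.error);
    });
  }

  async function getChunk(name: string): Promise<unknown> {
    const db = await openDb();
    const cached = await new Promise<unknown>((resolve, reject) => {
      const get = db.transaction("chunks").objectStore("chunks").get(name);
      get.onsuccess = () => resolve(get.result);
      get.onerror = () => reject(get.error);
    });
    if (cached !== undefined) return cached;

    const fresh = await (await fetch(`/search/${name}.json`)).json();
    db.transaction("chunks", "readwrite").objectStore("chunks").put(fresh, name);
    return fresh;
  }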


JSON may be a very bandwidth-inefficient format. A format that can be parsed from a stream could save RAM and bandwidth, especially on mobile, which is the most constrained environment.


However, it compresses very well on the wire. One of the simplest "streamed JSON" solutions is to have one JSON object per line of input and parse that incrementally.
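
For example, newline-delimited JSON can be consumed straight off fetch's stream instead of buffering the whole payload (sketch; assumes the server serves a plain .ndjson file):

  // Parse newline-delimited JSON incrementally as it arrives.
  async function* ndjson(url: string): AsyncGenerator<unknown> {
    const res = await fetch(url);
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();
    let buffer = "";
    for (;;) {
      const { done, value } = await reader.read();
      buffer += decoder.decode(value, { stream: !done });
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? "";                    // keep the trailing partial line
      for (const line of lines) if (line.trim()) yield JSON.parse(line);
      if (done) break;
    }
    if (buffer.trim()) yield JSON.parse(buffer);     // input may not end with a newline
  }

  // for await (const record of ndjson("/search/index.ndjson")) { /* index it */ }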


One thing you can do is make the data itself more concise:

- Stripping all whitespace from formatted JSON can make a huge difference

- Making property names shorter (these get repeated for every element in a large dataset!)

- If your data is relatively flat, you could replace an array of objects with an array of arrays

Or you could go all the way and serve data in CSV format, which is very space-efficient and has the neat property of being tolerant of being broken up into pieces. Though it may not parse as quickly, since JSON has native parsing support in the browser.
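
As an illustration of the array-of-arrays idea, the same records with the keys stated once (hypothetical data):

  // Array of objects: the keys are repeated for every record.
  const verbose = [
    { title: "First post", url: "/posts/first", words: 312 },
    { title: "Second post", url: "/posts/second", words: 845 },
  ];

  // Array of arrays: keys appear once, rows stay positional.
  const compact = {
    columns: ["title", "url", "words"],
    rows: [
      ["First post", "/posts/first", 312],
      ["Second post", "/posts/second", 845],
    ],
  };

  // Rehydrate on the client if you want objects back.
  const records = compact.rows.map(
    row => Object.fromEntries(compact.columns.map((c, i) => [c, row[i]]))
  );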


JSON can be compressed very well, and while your advice is good, once you compress it's not that important anymore.


Sometimes you can partition large sets into multiple smaller ones. If you add this extra level of indirection, you don't need to retrieve all the data, just the data that is actually being used, which should be a small fraction for problems that partition well. But many problems can't be solved with this approach.


I guess you can always load only the index in the browser, can't you?


A dream come true, and something I've been looking for for a long time now. Thank you for sharing this.

