davidzweig's comments

I bought a sit-on electric Chinese cargo scooter from an importer in Bulgaria because I liked how it looked, and it's handy for going around town. They are everywhere in China. They are assembled by hundreds (probably thousands) of small factories. A big center is Wuxi. I visited a few factories there last year. The scooters cost about $350 USD in China.

There seems to be a sort of loose standardization around components, which are also churned out by countless factories. Wheels/motors are generally 10" or 12"; brake discs use one of a couple of standard 3- or 4-bolt patterns. I'm sure the electrics likewise follow some patterns.

I had the thought that it would be fun to 3D-scan, measure, and document parts from some of these scooters, and make a database of suppliers. Then, who knows, start to make the frames and assemble in Bulgaria.

But you really need to be spending time in China to do that, and it wasn't really the right time for me then.

Another thing to note: branded scooters (Niu etc.) are becoming more popular; they have better quality and more extensive plastics, and you can probably order parts from the manufacturer.


Any tips for models that can restore speech quality on aged cassette recordings (the higher frequencies are often partially lost)? There are different approaches to noise reduction, and also audio super-resolution, but I didn't find anything geared specifically towards old tapes.


There's a piece of commercial software called Celemony Capstan. Might not be right for your application but it was designed explicitly for tape restoration.


I'll drop this here: If anyone wants to work on Language Reactor (well compensated), my email is in my profile. I'm planning to start open-sourcing much of it soon.


I use Firebase SDK with the FirebaseUI library on languagereactor.com. When users access the /login page, if the extension is detected, the signInSuccessWithAuthResult callback triggers getFirebaseSignInToken to obtain a custom token. This token is then passed to the Firebase SDK running in the extension’s background worker via messaging, where signInWithCustomToken() is called. The SDK in the background worker has an onAuthStateChanged() callback that notifies any listening tabs when the authentication state changes.

However, some users had been reporting issues related to third-party cookies and a few other minor problems. Recently, effectively running a 'DROP TABLE' on 400GB of Firestore data ended up costing $2,000.

I'm looking for an auth replacement: 2 million users, mostly free. The system needs to support Google sign-in and email authentication, be possible to integrate with React Native Expo, and have the ability to issue API keys (that's probably separate). No vendor lock-in, under $500/month, happy to self-host. Any recommendations appreciated.


    > However, some users had been reporting issues related to third-party cookies and a few other minor problems. Recently, effectively running a 'DROP TABLE' on 400GB of Firestore data ended up costing $2,000
This doesn't sound like an issue with Firebase Auth per se. You can still use the auth and move your storage to some other mechanism (a friend working on another project is using Firebase Auth with a Supabase backend because he couldn't get Supabase auth to work with Claude generating most of his code).

In your case, depending on the document size vs. number of documents, it might have been more economical to queue the deletions so that each day you use exactly up to the free limit (20k deletes per day), spreading the deletion over a number of days if there were no other constraints.
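A minimal sketch of that scheduling idea, assuming the free quota is 20,000 deletes per day as stated above (the helper names and document-ID list are illustrative; the actual deletes would go through the Firestore client):

```python
# Sketch: spread deletions across days so each day stays within
# an assumed Firestore free tier of 20,000 deletes per day.
import math

FREE_DELETES_PER_DAY = 20_000

def deletion_schedule(doc_ids):
    """Split doc_ids into daily batches that each fit within the free quota."""
    return [
        doc_ids[i:i + FREE_DELETES_PER_DAY]
        for i in range(0, len(doc_ids), FREE_DELETES_PER_DAY)
    ]

def days_needed(n_docs):
    """Number of days required to delete n_docs for free."""
    return math.ceil(n_docs / FREE_DELETES_PER_DAY)

# e.g. a million documents would take 50 days of free deletes
```

A cron job (or Cloud Scheduler task) could then pop one batch per day and issue the deletes, trading time for cost.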


I have Nreal Air glasses (the company changed its name?). They aren't really usable for programming; the image is too soft. But they're neat for watching Netflix on the train etc.


Also many Iranians speak 'Turk' and understand Turkish from watching Turkish telenovelas on satellite TV.


More than half of the Iranian population belongs to an ethnic minority, and the biggest by far is the Turkish-speaking one, the Azeri: something like a third of the population, as far as I remember. I don't know if non-Azeri learn Turkish from TV, but for a lot of Iranians it's simply their first language.


Good point! Although, what they call 'Turk' is actually Azeri :) (Part of the historical Azerbaijan is in Iran.)


How very interesting. I understand Turkish and Farsi are totally unrelated (Turkic and Indo-European language families).


Tried the demo, looks similar to the Rosetta Stone method. Very nice execution. We also make a language app.


Thank you! I appreciate your comment. What language app are you working on?

Do you have any feedback or suggestions for us? I’d love to hear your thoughts on how we can succeed in this market. Any advice is welcome! :)


The security against downloading audio from YouTube has been upped recently with 'PO tokens'.

Whisper is only a few tenths of a cent per hour transcribed if you run it on your own GPU, though, at about 30x real-time on a 3080 etc. with batching.
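Back-of-the-envelope arithmetic for that figure; the 30x speedup is the comment's claim, while the GPU rental price is an assumption, not a quoted rate:

```python
# All inputs are rough assumptions for a cost estimate, not measurements.
GPU_COST_PER_HOUR = 0.10   # assumed rental price for a 3080-class GPU, USD
REALTIME_FACTOR = 30       # claimed batched Whisper speedup on that GPU

def cost_per_audio_hour(gpu_cost=GPU_COST_PER_HOUR, speedup=REALTIME_FACTOR):
    """USD to transcribe one hour of audio on a rented GPU."""
    return gpu_cost / speedup

# ~$0.0033, i.e. roughly a third of a cent per transcribed hour
```

At a pricier $0.20/hour rental the result is still well under a cent per audio hour, which is where the "few tenths of a cent" figure comes from.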


> The security against downloading audio from YouTube has been upped recently with 'PO tokens'.

do you have a source? more generally is there a community or news source for youtube "api" news like this?


I haven't been following closely the last few weeks, but you can check the issues in this repo, for example: https://github.com/distubejs/ytdl-core


Tbh I've not had trouble with this for personal use.


I was trying this with the original LLaMA model. I guess the model didn't really know it was meant to be a 'knowledgeable AI assistant', but rather simulated chats it had seen. If you asked it 'how to make brownies', it might reply, 'idk, can't you google it?'.


When you prime it with those initial 10-20 examples, the responses need to be in the style that you’d like it to respond to. You can use Claude or ChatGPT to help you write those. The model will then just continue on in that same style.
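A sketch of that priming for a base (non-instruct) model. The role labels and formatting here are just one plausible convention, not anything the model requires:

```python
def build_fewshot_prompt(examples, question,
                         user_tag="User:", assistant_tag="Assistant:"):
    """Concatenate example Q/A pairs so a base model continues the
    conversation in the same style. `examples` is a list of
    (question, answer) tuples written in the tone you want back."""
    parts = []
    for q, a in examples:
        parts.append(f"{user_tag} {q}\n{assistant_tag} {a}")
    # End with the real question and a dangling assistant tag,
    # so the model's continuation becomes the answer.
    parts.append(f"{user_tag} {question}\n{assistant_tag}")
    return "\n\n".join(parts)

examples = [
    ("How do I boil an egg?",
     "Place the egg in boiling water for 7-9 minutes, then cool it in cold water."),
    ("How do I make tea?",
     "Steep a tea bag in freshly boiled water for 3-5 minutes."),
]
prompt = build_fewshot_prompt(examples, "How do I make brownies?")
```

With 10-20 such pairs prepended, the base model is far more likely to continue as a helpful answerer than to simulate a dismissive chat reply.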


Unfortunately those examples blow up the cost compared to just asking the question. It's a nice workaround, but not always feasible (unless everyone adopts context caching like DeepSeek and Anthropic did).


Did anyone try to check how its multilingual skills compare with Gemma 2's? On the page, it's compared with Llama 3 only.


Well it's not on Le Chat, it's not on LMSys, it has a new tokenizer that breaks llama.cpp compatibility, and I'm sure as hell not gonna run it with Crapformers at 0.1x speed which as of right now seems to be the only way to actually test it out.


