> Your WiFi information is never sent to the server.
This makes me think that we could use some flag that identifies purely static websites. Like next to the green https lock there is a sign that this website can not send data to any server
I think something like that might be feasible in the future with things like web bundles[1] and content security policy[2]. In theory you could have a site loaded as a bundle and a CSP that blocks everything not loaded through the bundle, effectively cutting off all network access.
You'd probably also need some sort of Feature-Policy[3] that prohibits access to any form of persistent storage so sites can't just save data until the next non-CSP-protected page load and transmit it then.
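As a rough sketch of what such a lockdown might look like, here is a page that ships a very restrictive CSP in a meta tag. The directive list is purely illustrative, not a vetted "offline" policy — as the thread notes, it wouldn't stop storage-based delayed exfiltration or channels like WebRTC on its own:

```shell
# Write a demo page whose CSP forbids loading or connecting to anything
# except same-origin resources. NOTE: illustrative only -- this does not
# make a page provably "static", it just narrows the obvious channels.
cat > offline-demo.html <<'EOF'
<!doctype html>
<meta http-equiv="Content-Security-Policy"
      content="default-src 'none'; img-src 'self'; script-src 'self'; style-src 'self'; connect-src 'none'">
<title>Static-only demo</title>
EOF

grep -c "connect-src 'none'" offline-demo.html   # -> 1
```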
So don’t allow GET with query params. You want the static moniker? It has to be static. No server interaction after load, and no sending any data during load.
How about after load, that tab automatically goes completely offline.
Users can manually do this in Chrome on a tab-by-tab basis by opening the developer console and setting Throttling to "Offline".

That sounds more promising. The site might be able to store data and then send it the next time the page is loaded. I think at the end of the day, a malicious dev could probably find a workaround to most implementations. Might just be better to vet out sites and use reputation to state they are truly offline.
I can also recommend drawing it instead of printing if you hate printers with the same passion as me. Record to beat for drawing a working QR code is 3 tries and I think about three hours (for an SSID of 13 bytes (5 unicode characters) and 19-character password). I did it mostly out of curiosity how hard it would actually be (spoiler: medium difficulty).
A remedy exists, it costs around 100€, is sold by Brother and is called "wifi-enabled laser printer". Your life will be free of printing woes from then on.
Had zero issues with wifi so far. I can even directly print from my iPhone. Being able to place the damn thing anywhere and hide it in a closet is great.
It's good to hear that the situation has improved. Even so, I'd still try to wire it up whenever possible because diagnosing printer problems is the most boring kind of problem ever in the history of mankind. If I ever have printer problems I want to minimise the variables.
Love my HL-L2360DW hanging directly off Ethernet. What a truck! Fast enough, auto duplex printing. Mine never jams. It just doesn't. Cheap cheap cheap to run.
I also have a Canon MG6220 for when I need color, which is not often. It gets used more as a scanner/copier than as a printer. It has also been really reliable, and since it doesn't print all that much, the ink cost has been quite bearable. I might not feel the same about the Canon if I didn't have the little Brother laser printer available to pound out B&W documents when I need them.
If OP's experience is anything like mine, they just suck at doing their job.
There's often some silly reason it's stuck, or it prints wrong, or the colors are off, etc. It's really just a super painful experience and I can't believe it's still so bad.
I am very surprised you have not heard this before, especially if you've used a printer in your lifetime. Yes, it's a "meme" (not sure whether to apply that word if it's not a badly drawn or photo-edited image, perhaps recurring theme is a better word?). You're one of today's lucky ten thousand!
> a meme is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme
Others said most of what there is to say but to give an answer from 'OP': them working reliably is the exception.
Out of yellow -> refuses to print black. DRM cartridges that you can't have refilled. Black ink being more expensive than blood. Driver problems. Network issues. Paper jams. Out of paper error when there is paper. Trying to draw from the wrong paper tray. It doing five minutes of warmup exercises when you really want to get going but forgot to print this signed form to hand in. A new printer being literally free if you just buy the ink with it (my grandma has one of those). And so on.
It's insanity more often than not.
Ironically, Linux drivers have been (I've noticed this since ~2013) more reliable than Windows ones. For scanners too. My dad had to re-add the printer (which works fine on Linux) every time on Windows in the configuration panel. It would say 'offline' but could be discovered and then printed to just fine. But only until it went into standby mode; the next day it's the same story. Now he's got an expensive scanner (the kind where you get business support), same story: it just doesn't see the device half the time. This time it's clearly a device issue though: it also only responds to ping when the computer can work with it (i.e. the device is just unreachable when the computer says it's unreachable; not a driver issue for once). And you pay a few hundred bucks for that.
The big ones at school or in bigger companies, those seem to work reliably most of the time these days, and if one is out of order you can go a floor up and use the next one. I also remember my old boss (RIP) had a printer that never had a single issue -- of course the model went out of production by the time it gained the track record (all I remember is that it was a Brother laser printer, no colors, only one paper tray, no scanning... all that probably helped). Perhaps you, too, had one of these lucky models and were spared these printer problems.
But so yeah that sparked a whole category of jokes on the internet, most people having this experience (at least until a few years ago, perhaps it has gotten better? People also just don't print as much).
I used to hate printers too but that changed when I started using HP Instant Ink.
The printer needs to be connected to wifi all the time, but on the flip side, the printer orders ink cartridges well in advance (so they get delivered in time and I have them ready when I need them).
Wow, I'm blown away by the response here, more in upvotes than in comment volume but that doesn't make me any less happy to see it. Your comment and u/dmitryminkovsky's are very kind, thank you. I'm glad you enjoyed the experience :)
I think the risk model for this sort of thing is still... complicated, even though the site as it stands is safe and does not transmit credentials.
After first gaining popularity, the domain could later pass to someone with malicious intent quite easily, eg:
1. Tech people like HN verify the site as credible and approve of it
2. The site gains popularity and goes viral / receives significant use
3. The original author abandons the site because of costs, or simple boredom
4. A malicious actor acquires the domain and begins recording users' credentials alongside IP addresses.
Conceivably, an enormous amount could be captured before the malicious recording became exposed, and (importantly) most of those whose credentials were compromised would have no straightforward path by which they could be alerted.
Other scenarios: site is malicious to begin with but set up to not transmit credentials during its first ~28 days.
Mirrors of this site with credential capture added (hard to claim that's a flaw of this site itself, just a flaw with "this type of site being normalized").
Note that today in Chromium-based browsers this does _not_ take a page completely offline - it only affects some specific internal HTTP request APIs, so it's still quite possible to exfiltrate data. For example, WebSockets & WebRTC aren't affected: https://bugs.chromium.org/p/chromium/issues/detail?id=423246 (although it looks like there's work to start to properly support this in progress).
I'm suggesting that monitoring an application for malicious behavior to detect it after the fact is the wrong approach. Once the data is sent, it's too late to do anything about it.
Oh, you first try fake data? That's easy to counter, for example probabilistically: the app tosses a coin and only sends the data with 50% chance the first time. Now half of the people using your approach will think the app is safe and get their data stolen anyway. Or it can use some side channel, delay activity, ...
Yep, I kind of wish browser permissions let you gate access to the network just like they do the camera, mic, location, etc. (Though, network permission would probably need to be enabled by default to not break the Web.)
It would be really nice if the source code in the browser (no, not the Github repo; how do I know the web server served the same thing as the Github repo?) were displayed in a human-readable format with proper tabbing, comments if applicable, and not an obfuscated, minified form.
In fact if your goal isn't specifically obfuscation, it's not necessary to minify JS in general. Web servers do a good job gzipping stuff.
We’re moving far, far away from the observability and introspection that once made the web a transparent medium. First it was minified JS, then it was minified and obfuscated JS, then it was using a canvas instead of the DOM to prevent even HTML inspection, and now it’s binary executable payloads (WASM) rendering to a WebGL context with zero introspection possible.
Hm, we could have the best of both worlds if source maps (can already be served to browsers and are supported fine) could be proven authentic.
I believe right now, I could use foo.min.js and serve you a cruddy foo.min.map.js to mislead you.
If I served you the original sources foo.concat.js and a build script to go from foo.concat.js => foo.min.js instead, we could have both the speed of the minified version and the (proven by the browser) accurate source code and maps!
I do too. Even the web extension APIs don’t seem to comprehend this. It’s super disappointing and it’s holding back extension development and enabling a massive malware problem.
Yes, but not directly via a service worker (aren't they blocked from network access in general?) - you'd have to trigger the main thread to sync with the service worker, then perform the exfiltration for you.
The original way to do this via a synchronous XHR request in visibilitychange/beforeunload/unload handlers gave you a 1-2s window to exfiltrate the data. That's been deprecated in exchange for a more insidious "invisible background connection to the server maintained after closing the tab or navigating away" that doesn't involve any UI delays. (For "a better experience" of course, not for stealth! Never!)
It's called the Beacon API [0] and it's supported by basically all browsers [1]. It was introduced chiefly because Google Analytics was stuck between blocking in the head to record the page view, or moving to the end of the HTML document and using a document-ready handler — with a fairly high risk of the user navigating away before that had a chance to trigger at all, or after it triggered but before it could exfiltrate the data.
Wow - that is just sad to see from Mozilla (sighs, where did all the good guys go?)
Excerpt from their KB
Navigator.sendBeacon()
.... It’s intended to be used for sending analytics data to a web server, and avoids some of the problems with legacy techniques for sending analytics, such as the use of XMLHttpRequest.
And it's crazy you can't disable phone service on iPhone.
Also, if you worried about someone accessing your network then you probably need to isolate your devices from seeing each other.
TBH, other than pwning my router or misusing my IP address in some way, I don't see much problem having my devices on a public net. All of them run firewalls and are up to date. Most traffic is HTTPS and I'm not sure you can MITM with just the WiFi password lol.
My iPhone has a pre-defined "Make QR Code" "Starter Shortcut" that you can invoke by typing "Make QR Code" in Home screen search. It basically generates a QR code of the same WIFI:S:$SSID;P:$PASSWORD;; format as wificard.io.
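That payload format is simple enough to assemble by hand. A minimal sketch (the SSID and password here are made up, and `WPA` can be swapped for `WEP` or `nopass`):

```shell
# Build the de facto WiFi QR payload: WIFI:T:<auth>;S:<ssid>;P:<password>;;
ssid="MyNetwork"
password="hunter2hunter2"
auth="WPA"
payload="WIFI:T:${auth};S:${ssid};P:${password};;"
echo "$payload"   # -> WIFI:T:WPA;S:MyNetwork;P:hunter2hunter2;;

# Feed it to any QR generator, e.g. (if qrencode is installed):
# qrencode -t ansiutf8 "$payload"
```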
Note to anyone blindly cutting and pasting this, the final EOF needs to be alone on the line with no indentation (VSCode did me a disservice and formatted it poorly which confused the hell out of shellcheck but left me learning something today, which was nice).
Since OP intended to learn a bunch of technologies while developing this project, I'd be amused to see a version of this based on qrencode compiled to WASM.
There is an inherent skepticism about typing your password into a web form even/especially if it says it is not sent to the server. My read of this comment was on the one hand telling security conscious people how to achieve the same goal in a way that does not leave the computer, and on the other hand putting a lower bound on the complexity of the value provided by the site.
I think the point of simplicity here is that you apt/yum/yay/whatever install qrencode and then go to town.
You're on your own machine, you know the exact spec used to generate the QR, and while you're at it, you can use the same tool to generate TOTP QR codes and WireGuard configurations and a bunch of other things like links to websites or automations for Tasker (et al).
The world is your oyster when you know what and how the "magic" works, and while you're at it, you aren't risking pasting your WiFi passwords into some random website.
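The TOTP case mentioned above works the same way: qrencode will happily render an otpauth:// URI. A sketch with entirely placeholder values (issuer, account, and secret are invented; strictly speaking the label should be percent-encoded, though common authenticators tend to accept it as-is):

```shell
# Assemble an otpauth:// URI for a TOTP secret (all values are placeholders).
issuer="ExampleCorp"
account="alice@example.com"
secret="JBSWY3DPEHPK3PXP"   # base32, as the otpauth scheme expects
uri="otpauth://totp/${issuer}:${account}?secret=${secret}&issuer=${issuer}"
echo "$uri"

# Then render it in the terminal (when qrencode is available):
# qrencode -t ansiutf8 "$uri"
```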
I didnt read it as a criticism necessarily. Qrencode is just awesome. I like that someone made a UI for this though so non-ghost-in-the-shell folks can get down with this awesome use of QR code dopeness :^)
As other comments have pointed out, the tech stack used here is excessive for what it is trying to do. The author has said it was a project to learn some specific technologies. There is a lot of mirth for overcomplicated contraptions mucking up otherwise simple tasks (and rightfully so, IMO), hence the retort of a one-line console command.
It's not excessive. The linked post is way easier to use and doesn't require a terminal or installing packages manually. I wouldn't even know how to get the CLI version to work on Windows.
The only downside IMO is that you are entering your wifi details in to an untrusted website.
The command-line app is effectively eternal as long as it's mirrored and the runtime interface doesn't change. Can we count on this link to still be there when we need it?
Do you know how long most "Show HN" links still work after a year or two?
It's literally a webpage with some javascript, so if you save it along with the asset files, you can just open the page in browser from your filesystem, no need to specially host it or start anything.
But yes, I agree running a command like qrencode from terminal is easier in the long run.
Yes, but it depends on whether the node.js app has the option of generating a fully static site. It will also need to generate relative links for the browser to be able to directly load the site.
Late reply, but when I was setting up WPA3 a few months ago I used Android 11 as a reference and that is what it spits out from the Settings share screen. Without it iirc, it would search and search and never connect.
Why has it never occurred to me that I might be able to use a QR code? I always do `sudo cat /etc/NetworkManager/system-connections/TheSSID.nmconnection | grep psk`, type in my root password and then re-type the passphrase manually on my phone. The amount of time this is going to save me…
I was always aware my phone has an option to scan a QR to connect to WiFi, but I never thought to look up the QR spec for it. I had no idea nmcli had this functionality either, though I have used qrencode a lot for TOTP and WireGuard.
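For anyone curious what the grep approach actually looks like, here's a sketch against a simulated keyfile (the SSID, path, and psk below are fabricated; on a real system these files live under /etc/NetworkManager/system-connections/ and are root-readable, hence the sudo):

```shell
# Simulate NetworkManager's keyfile layout in a local demo directory.
mkdir -p demo-connections
cat > demo-connections/TheSSID.nmconnection <<'EOF'
[wifi]
ssid=TheSSID

[wifi-security]
key-mgmt=wpa-psk
psk=correct-horse-battery
EOF

# The one-liner from the comment, pointed at the demo directory:
grep -h '^psk=' demo-connections/TheSSID.nmconnection | cut -d= -f2
# -> correct-horse-battery

# Newer NetworkManager reportedly does this in one go, QR code included:
# nmcli device wifi show-password
```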
There is no real specification. This is the best source[1], which documents QR standards.
>There are some standards -- de facto and otherwise -- already in use. This wiki attempts to catalog some possible standards for encoding various types of information, and suggest a standard action associated to them.
Damn, that's saddening, but also answers a question I had about what the escape character is for a password containing `;` -- I guess there's no specification to tell me :-(
OTOH, I guess a "standard" is whatever the common implementations accept anyway, so if one could dig up the source to the WiFi QR scanner in source.android.com that's the 2nd best thing
In fairness, the string doesn't seem insaneo: T:=Type S:=SSID P:=Password then only the leading and trailing chars need to be "remembered", plus whatever the escape character is for that password string (or maybe even the SSID -- I'm not super clear on what characters can be in an SSID)
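For what it's worth, the de facto (ZXing-style) convention seems to be backslash-escaping the special characters — whether every scanner honors it is exactly the ambiguity discussed above. A sketch (the sample password is invented):

```shell
# Backslash-escape the characters that are special in the WIFI: payload:
# backslash, semicolon, comma, colon, double quote.
escape() {
  printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/;/\\;/g' -e 's/,/\\,/g' -e 's/:/\\:/g' -e 's/"/\\"/g'
}

escape 'pass;word:1'   # -> pass\;word\:1
```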
If the options for helping someone are sending them a URL or explaining how to install termux and then a new package and then running an obscure command in it, I know which option I would pick.
My new router came with the QR code stuck on. iOS and apparently Android have WiFi sharing, but it's still going to be a problem when an Android user visits an iOS user.
I generate the QR code and then add some text to make a kind of poster and print it out. iOS and Android users can then just use their camera app to automatically connect to the wifi.
I've met tons of people who will argue with you that it's not broken because "A trailing space is no big deal. You can't even see it." I try to explain to them that you may not see it, but the computer does. They just look at me with that blank stare as if I'm crazy and they're right. At that point I tend to drop it and take a mental note to never help that person with computer issues of any sort.
This repo of mine walks through how to use a raspberry pi with an eInk screen to automatically update passwords and the resulting QR code. Would love to see what y'all think!
I’m such a huge fan of that functionality. It doesn’t always work perfectly (i.e. pick up someone nearby trying to join the network quickly), but when it does, it’s such an awesome example of tech making our lives better in such a simple but helpful way.
The QR code seems to have the same amount of friction as sharing the password. It's something a guest can do asynchronously without needing someone else to do something.
Also, I like the use of guest SSIDs. My guest SSID is just like my main SSID but with L2 filtering for all traffic not going to the gateway. Guests can use my fast internet, but just not interact with my LAN or other guests. I also don't enforce WPA3 only on my guest network for legacy support.
Asking as a person who understands the fundamentals of all of the technologies involved but is horrified by much of what goes on in the minds of today's web developers and designers, what motivates the decision to make this repo use make, docker, yarn, npx, nginx, react, and jest rather than a bit of static HTML and css and something like qrjs2.js or VanillaQR.js with a simple canvas or datauri update in onInput?
Answering as the author who also understands the fundamentals of all of the technologies involved and questions much of what goes on in the minds of today's web developers and designers--
I use Make as the standard way to interact with every repo I own. This allows me to type `make build` instead of `$some-language-specific-command-I-forget-in-2-weeks`.
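A minimal sketch of that convention — the target names and image tag here are invented, not the real repo's targets:

```makefile
# Hypothetical uniform entry points; every repo gets the same verbs.
IMAGE := wificard:dev

.PHONY: build run test

build:    # wraps whatever language-specific command I'd otherwise forget
	docker build -t $(IMAGE) .

run:
	docker run --rm -p 8080:80 $(IMAGE)

test:
	docker run --rm $(IMAGE) npm test
```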
I use Docker for distributing every app I build. If the app is a website I also use the nginx base image. Docker images make packaging and distribution a breeze IMO.
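For a static site on the nginx base image, that packaging usually boils down to something like the following two-stage Dockerfile — filenames, base tags, and the build output directory are assumptions, not the project's actual setup:

```dockerfile
# Illustrative two-stage build: compile the site, then serve it with nginx.
FROM node:lts AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Only the static output ends up in the final image.
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
```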
Regarding yarn, npx, react, and jest: I'm similarly disillusioned by the churn but I also like to remain knowledgeable as the industry evolves. React was something I hadn't touched before, so I decided to pick a simple project to give it whirl ;)
It's also not like you'd have to have a separate server/vserver for every such project, a very affordable vserver will run quite a lot of nginx containers. Plus I guess some people will have a vserver or something like that anyway (I have a small one to get around NAT at the ISP level), in which case it might be net-zero.
My ISP puts everyone behind some sort of NAT, which means I can't reach my home network from elsewhere since I can't open ports. They'd let me buy my own IP address for a monthly fee, but I've found that an old Pi I already had, an autossh-ed reverse shell, SSH forwarding and a cheap cloud instance work just fine for checking sensors and the like, cost less and the cloud instance has other uses, too.
> I use Make as the standard way to interact with every repo I own.
After much fussing around with many kinds of solutions, this too is what I have settled on. Download repo and run `make` will "do the needful" to get you going, and all the major entry points are make stanzas.
> I use Make as the standard way to interact with every repo I own. This allows me to type `make build` instead of `$some-language-specific-command-I-forget-in-2-weeks`.
I also used to do this until I swapped Make out for Just[1]. I find it worth a recommendation.
Just like perl and grep, I stick to make because it's likely to be already available everywhere I need it. Such is the tyranny of the installed base. :/
People love to constantly say "Want to learn to code? Make projects! projects! projects!" and then when someone makes a project as a means to learn a particular technology, now people turn up their nose and say "ew, why did you use this?"
Anyway, this project provides exactly what I needed. Thanks to OP for sharing! Slick and simple
It's fine to criticize, but it's very likely "release wificard.io" was a bonus on top of the main goal of "learn react, nginx, docker, jest", so the author could have used simpler technology if it weren't for their objective of obtaining hands-on experience.
Sure but if you want to learn a specific technology by making a project, it makes more sense to make a project that can actually benefit from that specific technology.
Many of these are not critiques, but rather not-so-subtle implications that the author must either not know what they’re doing or is trying to show off.
A lot of developers only know the new, needlessly complicated ways of doing things. I have met professional software engineers who have never heard of shared web hosting, for example.
It's also surprisingly possible to learn enough about programming to get a job without understanding basic computing concepts. I've met professional software engineers, with multiple years of experience and promotions under their belt, who did not know the difference between hard drives and RAM in a server context. I've literally code-reviewed attempts to deploy a database server with 1 TB of RAM in order to store 1 TB of data.
I know the difference between RAM and disk in a server context (what sort of programmer wouldn't?!), but I've worked pretty exclusively in no-persisted-storage environments (mostly Heroku) for ~5 years now, where referring to "saving" files/data outside of a DB context almost exclusively means "save to memory" and I routinely run into problems whenever my stack requires downloading any sort of third-party library that would have to persist outside of RAM (e.g. downloading NLP libs).
When you abstract "storage" into "my library/tools save state somewhere" (and god forbid you include localStorage into the mix!) and don't deal with the hardware itself, I could see how a lot of new-ish coders wouldn't be able to differentiate RAM and disk.
I thought that was crazy, but reflecting on my CS education, it never went over the difference between disk space and RAM either. I suppose they figure everyone already knows?
In American terminology, individual classes are called "courses". This particular course is likely "required" as one of the steps in a Computer Science "major".
It could be meant not in a way of "this is ram vs this is disk", but "we have a 1tb dataset, what kind of a server do we need?", and the person not knowing enough about databases to think they need 1tb of ram to be able to run it.
>A lot of developers only know the new, needlessly complicated ways of doing things.
Yeah that was inevitable in an industry that moves quickly, likes new stuff, and prefers fast, superficial learning when that's all you need to get your product out of the door. Google and StackOverflow are probably controlling a decent chunk of decision making these days such that you don't need to think about solutions too much.
If I had done this project it would probably use node.js + gulp + sass + pug + browserify + babel + bootstrap ... or maybe Webpack instead of gulp and browserify (depends on the project).
Why? Because I have a website template that solves many things for me, like:
- I see no reason not to use SASS or the latest ES, even though I know how to do without them. Or pug, for that matter... which I like. I also have a SASS template with some utilities and a particular code organization.
- Using an ES bundler allows you to throw in libraries from npm. I would not write the QR code myself, for instance.
- Automatically watches sources and recompiles.
- Adds hashes to asset filenames in order to cache-bust changes in CSS/JS/images (critical when using a CDN).
- Has placeholders for things I'll probably need, like the metadata for building previews in social networks.
- Is prepared for dealing with i18n, if the need arises.
- Future-proofing, since 80% of projects you think are small end up becoming larger. This is a single-page site, but if I wanted to publish this in Europe it'd already need two extra pages for the Privacy Policy + Impressum... so the single page site suddenly needs to worry about navigation.
- It's prepared for quickly deploying to AWS or Github Pages. It could be quickly tweaked for working on Cloudflare Pages or other hosting/CI environments that do the compilation for you.
And most importantly...
... my stack does not negatively affect the end result. All the extra baggage is just part of the development environment. If you want to skip my tools, they're quite easy to bypass and replace for any other transpiler... or you can just ignore my sources, reindent my compiled files and work on them directly.
PS: Back in the 90s I drew complex table layouts on graph paper, typed them down with vi, and ftped them to the hosting. I'm well aware of the alternatives. My current workflow + templates + helpers are based on the need to efficiently juggle A LOT of completely different projects every year as a freelance developer.
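The cache-busting-hash item above is less exotic than it sounds — it's essentially content-addressed filenames. A sketch (the stylesheet and its contents are invented):

```shell
# Derive a content-addressed filename so a changed file gets a new URL
# and a CDN can cache the old one forever.
printf 'body { color: #333; }\n' > app.css
hash=$(sha1sum app.css | cut -c1-8)
cp app.css "app.${hash}.css"
ls app.*.css   # -> app.<8-hex-chars>.css
```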
> All the extra baggage is just part of the development environment. If you want to skip my tools, they're quite easy to bypass and replace for any other transpiler... or you can just ignore my sources, reindent my compiled files and work on them directly.
This might be technically true, but is not true in any meaningful sense if another developer ever has to work with your code. They will have to deal with all your baggage, and their job will be much harder because of it.
If "you have to make it easy for third parties to fork your website" were a requirement in a particular project, I would still use the exact same stack but output non-minified pretty-printed code to some subdirectory that will get committed to the repository... so they don't even have to run "npm install && npm build" if they don't want to or don't know what npm is (although the README in my template provides these instructions).
... but I would still get advantages in terms of speed, maintainability, etc. from using a modern web development environment that I see no reason to give up. There are things like cache-busting hashes, linting, splitting js code in modules and using variables in SASS for things like colors, that to me are mandatory in a professional practice.
> "you have to make it easy for third parties to fork your website"
That's not the requirement I'm talking about. It's "multiple developers you don't know, of varying experience levels, will work on this project over the course of years".
They're going to clone the repository and have to make changes. Ignoring your stack is not an option. They have to figure it out, or throw it away and rebuild something else.
They'll be the ones paying the time cost of all the advantages you get by doing something complex but easy for you.
You may say this doesn't apply in your situation, and maybe right now it doesn't, but it almost always happens in any successful project eventually.
I'd much rather work on projects that used as many standardized common tools as practical to do the job, versus a project where someone did some kind of code golf to try and use as few tools as possible to save some negligible amount of server space or bandwidth. I'd expect it would be much easier generally to find other collaborators for the first type of project too.
I often choose simple, useful projects for learning new technology… i can’t learn something new if I am not using it for something useful, but I also don’t want to try to learn something new WHILE also solving a challenging problem… so something simple like this is a great sweet spot to learn new tech.
This app seems to be using Create React App, so agreed, the app might come with a few extra things that you might not need. However, there are a few reasons I might choose CRA over static HTML. Don't get me wrong, HTML is great, but the developer experience can be primitive.
Here is why I like using CRA for some projects:
1. Live reload. This comes for free with CRA. Makes dev work easy.
2. It's as easy as installing a component and getting started, and components logically fit in the code flow. With natively imported libraries, you have to write the JavaScript yourself, point them at your divs in the DOM, and initialize them.
3. Let’s say in the future, I want to reuse the code logic in a different app, I can just take the js and element as one unified component and move it across.
Tbh, for an app like this, after doing a production build (npm run build), I don’t think it’d make a radical difference in performance with raw html or react. Might just be dev preference, and ease of use.
Docker with nginx because that’s your delivery pipeline. It’s much more common to have a place to run an OCI container than a directory to upload static HTML. CDNs operate on this model but there aren’t many turnkey “on-prem” solutions. The tooling to run a container and get it hooked into your web tier with routes is actually the easier path these days.
Edit: More
The reason for this model is that it makes everything the same and your caching tier is the great equalizer. Everything is a backend for your caching SSL terminating reverse proxy. And your static site will live in the cache for ages so once your cache is warm there’s no real performance hit.
I'd say that a tiny amount of CSS would make that basically perfect for the task, except that it's honestly already perfect for the task now. Though it looks like you replace the semicolon a second time instead of the colon in `clean`.
The same reason I take a massive V8 engine attached to a massive truck bed one mile over to my friend’s place to watch England lose at the Euros. Turns out that sometimes, even when tool X isn’t the best at the job, it’s way better for me to use what’s at hand.
This is what a lot of people have trouble understanding: sometimes the best tool for the job is the tool you’re best in.
Evidence? No one has made this static website you’re talking about. It doesn’t exist. This one does. The imperfect app that exists beats the perfect app that is vapor.
> No one has made this static website you’re talking about. It doesn’t exist. This one does. The imperfect app that exists beats the perfect app that is vapor.
Another poster whipped one together since this was posted, because it's so trivial to do with vanilla tech:
I also don't think anyone has a problem with someone who says: "I dunno, this was just the way I learned and I don't know the underlying tech."
But that's usually not what happens. Instead, there are endless rationalizations for why the obvious over-engineering is not only okay, but preferable.
Probably to learn those various technologies. Not only did they use a bunch of new tech, they made something other people can use in the process. Better than making a bunch of throwaway code.
What motivates you to use a feature-rich browser on an OS with a graphical user interface, running on on-premise hardware including your own internet line and all those peripherals, when you could just use the CLI from a good old VT100 hooked up to your nearest mainframe to HTTP POST a comment on HN?
Yeah, probably because nowadays you just do it like that.
I've got a QR code like this on the wall with the SSID and the password written on the paper, as well as an NFC tag with the WiFi credentials between the paper and the wall.
I have a QR code for guest Wi-Fi too. Unfortunately it doesn't seem to work as expected on iPhones, though it works fine on Android. I wouldn't have thought a QR code would be platform-specific.
I like to use make as it's available everywhere. I use Makefiles with podman to create images, containers, and pods, start and stop them, and display logs. With the alias 'm' for make I can execute complex actions with very few keystrokes. Also, variables in Makefiles make it easy to update labels, volumes, etc. But I would not use make for very complex projects.
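A minimal sketch of that pattern, written as a shell heredoc so it can be pasted without worrying about literal tabs (the image name and targets are made up; `make -n` prints the command instead of actually invoking podman):

```shell
# write a tiny Makefile wrapping podman; .RECIPEPREFIX (GNU make 3.82+)
# lets recipes start with '>' instead of a tab character
cat > Makefile <<'EOF'
.RECIPEPREFIX := >
IMAGE = myapp

build:
> podman build -t $(IMAGE) .

logs:
> podman logs -f $(IMAGE)
EOF

# dry-run: show what `make build` would execute, no podman needed
make -n build
```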
Make is tremendously useful for running jobs with (static) dependencies. I used to run make for managing Docker instances until docker-compose became a thing.
Can you actually point your phone camera at a QR code and have it connect automatically, as the website says? I generally avoid QR codes as they obfuscate the target URL, but I thought you needed an app for reading them on Android and possibly also iOS.
It's definitely built into iOS; just point the Camera app at it and it'll tell you what the domain behind the QR is, then just tap to follow the link.
Something like wifi will have a custom prompt that says “Connect to WiFi network MyNetwork.”
The default Android camera app will read them for you (and display the URL before you tap to open it in a browser) by either pointing the default camera at a QR code or manually scanning it with the "Lens" camera mode.
The Android wifi menu (where you select from nearby networks to join) also has a QR icon that lets you scan to immediately join a network (next to "Add network"). I'd imagine you'd also join if you scanned it from the camera app, too.
Really cool. Some comments show that such QR codes can be generated with a simple command, but using this service is faster for someone like me who isn't familiar with QR codes.
If you're going to do that, simpler to just use `qrencode` with the same contents as this is using:
WIFI:T:WPA;S:${ssid};P:${password};;
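For example, with made-up credentials (the final `qrencode` line is commented out since it needs the qrencode package installed):

```shell
# build the WIFI: payload with example credentials
ssid="MyNetwork"
password="hunter22"
payload="WIFI:T:WPA;S:${ssid};P:${password};;"
echo "$payload"
# prints: WIFI:T:WPA;S:MyNetwork;P:hunter22;;

# render it as a PNG (requires the qrencode package):
# qrencode -o wifi.png "$payload"
```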
Unless you need to frequently generate WiFi login QR codes (single purpose!) from a device without a convenient command line, e.g. mobile, there's not really a reason to self-host this.
(It makes sense for OP/the project to host it as a demo and for people who don't care or trust it to use though - I'm not hating on the site existing.)
Running it on Firefox with web inspector opened, I saw no network traffic at all after the page loaded, so it wasn't sending the network name or password anywhere off my device.
They do show up; not the connections themselves, but the initiation, no? At least I always see lots of 101 Switching Protocols responses on sites that use WebSockets. It might be some sort of nonstandard gateway, though; I've never used it, so I don't know.
Of course even if that's the case, you gotta load the website with the network inspector already open, to see the initiation.
It only blocks IPv4 traffic, so on an IPv6 enabled host it can easily be bypassed without even involving DNS. But assuming an IPv4-only host, on a system using nscd, DNS lookup is performed over a Unix socket:
    Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in  out source     destination
       23  1445 DROP   all  --  *   *   0.0.0.0/0  0.0.0.0/0    owner UID match 1007

    # setuidgid outtest ping -n 1 -4 google.com
    PING google.com (142.250.102.100) 56(124) bytes of data.
Probably should also run it in a sandboxed tab and delete all site-related data after use. It is possible for sites to detect the network connection state and sync data when a connection is re-established.
EDIT: you could also just run the local server in offline mode and shut it down before you re-connect.
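A sketch of that, assuming the generator is plain static files in the current directory (the port is arbitrary; `http.server` is Python's stdlib static file server, bound here to loopback only so nothing is reachable from the network):

```shell
# serve the current directory on loopback only, in the background
python3 -m http.server 8000 --bind 127.0.0.1 >/dev/null 2>&1 &
server_pid=$!
sleep 1

# fetch the index once to confirm the server answers (stdlib only)
status=$(python3 -c 'import urllib.request; print(urllib.request.urlopen("http://127.0.0.1:8000/").status)')
echo "HTTP $status"

# shut the server down before reconnecting to the internet
kill "$server_pid"
```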
I am not sure I understand the app. Is it two input fields, one for the network name and one for the password?
Which you then print?
Is that the functionality, or have I misunderstood?
Seems to be using the same format as https://qifi.org/, which lists supported devices, but of course due to the lack of native QR on most Android devices it's per-app rather than per-phone.
I don't get it… is it that every single Android OEM makes their own camera app? Or did Google choose just not to have barcode/QR scanning as part of the default camera app?
If I remember correctly from playing with this stuff a few years ago, there’s a simple text format for WiFi SSIDs and passwords that’s recognized by both iOS and Android, among others.
A nice addition would be either (1) support other languages or (2 - simpler) the ability to edit the field titles (e.g. change from "Network name" to "Nome da rede").
It is open source; you can simply Ctrl+F and replace those strings if your use case requires it. Or you could add a dropdown somewhere with those variables to replace the placeholder texts.
You can also set up an NFC tag for WiFi. It works basically the same but saves you fiddling with a QR reader: just tap the phone to the tag and tap connect.
At first I was like "OMG, this is going to be awesome, this person has a QR generator which will magically connect me to the WiFi with my password embedded in the QR somehow. I am going to have to read the source... This is gonna be great..." But then, disappointment. It is just a card with my password in plain text. Why the heck would I ever print this out?
> a QR generator which will magically connect me to the WiFi with my password embedded in the QR somehow.
This is indeed what it does, but it also includes the plaintext password in case you want to connect a device that doesn't have a camera, like a PC. There's an open issue for adding a "hide password" option. You could also just cut off or scribble out the password on the print out.
This is for public encrypted guest WiFi networks, to make guests' lives more comfortable... you can scan it with your phone to connect automatically without needing to type the password, which comes in handy if the password is at least somewhat secure.
Usually it's printed in the same places where you would otherwise print your plaintext Wi-Fi password, if you're already doing that: restaurants (a QR code inside a menu card that's only given to actual customers), offices (a QR code inside the meeting room for actual guests), Airbnbs (a QR code on the fridge inside the house), hotels (a QR code inside the room with the router).
I'm guessing the whole point behind the "standard" is to allow quick and easy access to protected networks. It's an alternative / complement to verbally telling someone the passphrase, or having it printed in clear text on paper.
Think of hotels / offices with guest networks. I use it for my wifi, my home network is not secretive enough to not let friends / family join it when at my place.
Wifi passwords aren’t supposed to be secure. Really they’re only to keep people off the network that you want to keep, well, off the network. The current method of connecting someone to wifi is usually just telling them the password. If the guest has a computer on the network (or a mac signed into the same apple ID as an apple device on the network), it’s trivial to figure out a network password that was entered for you.
If you’re wanting a _really_ secure network, WPA2 isn’t the way to go. You’d want to credential every user using 802.1X or WPA2 Enterprise.
It depends on the qr-code scanner you have on your phone. The one on my android shows a button 'connect to this network'.
In regards to security, you are completely correct. But this is to be used in cases where one would put a paper with the password on the wall, think of coffee shops. Saves some typing. Or you can put a card on your coffee table to help out your house guests.
Suppose you have a long password for your WiFi and want an easier way for friends to connect to your network without having to type a complex password.
more QR meh -- how does one use this? Assuming everyone has native QR code ability that ties into Network setups? Sorry if I'm behind but is this standard in Android now? (who cares about iOS, let's go with broader platform for now)
Is your threat model really so severe that it includes someone driving to your house and connecting to your wifi network? Of all passwords, a wifi password is one you should _never_ reuse.
You could also inspect the source, it’s open source, or network requests.
Opening it in an incognito tab with the internet disconnected, to make sure it works offline and that nothing gets sent after you close the tab, should be safe and secure. But yeah, you can also look up the proper QR code format and make it yourself using qrencode in a terminal, or something like that...
but for semi-public WiFis like the ones in restaurants, hotels, ... it's a survivable risk
Depending on how remote your farm is and how strong is the WiFi signal outside your premises (from a nearby public road, ..).
Average case you are in the middle of bumfuck nowhere with a huge private property surrounding your WiFi and nothing's gonna happen.
Worse case, some script kiddie attacks your vulnerable router (if it's using the default password) or a smart gadget (if you have one that's a couple of years old) to join a botnet, getting your IP address on a blacklist and limiting your internet usability (blacklisted IPs may not be able to send e-mail, may get captchas on major websites like Google, ...)
Worst case, someone driving around who notices you have an open WiFi may drop a battery/solar-powered Raspberry Pi with a 4G modem near your WiFi and use it as an untraceable VPN/proxy to perform illegal stuff (e.g. upload child pornography or perform some serious hacking), getting you in trouble with the law.
This looks like a simple WiFi password QR generator; a cooler solution would be to generate one-time passwords for guests.
When someone comes to you, they can click on the touch display to generate a password, each guest can then have a separate VLAN.
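A sketch of just the password half of that idea (the SSID, passphrase length, and WPA type are made up; the per-guest VLAN plumbing would live in the router or controller, not here):

```shell
# mint a random 12-character one-time passphrase from /dev/urandom,
# dropping base64 characters that the WIFI: format treats specially
otp=$(head -c 32 /dev/urandom | base64 | tr -d '+/=\n' | cut -c1-12)

# emit the matching WIFI: payload for a guest QR code
printf 'WIFI:T:WPA;S:GuestNet;P:%s;;\n' "$otp"
```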
Not really sure what use knowing my wifi password would be unless you were specifically stalking me anyway.
You have a password and an SSID for somewhere in the world (maybe, if they didn't lie), or, if tracing IP addresses, somewhere in a neighbourhood or city.
Seems about as useful as an MH 370 flight data recorder password (if such things exist). Yes, you could break into the black box with it, but that’s not the real problem.
Luckily, you don’t have to believe - you can see for yourself by opening Developer tools in your browser and monitoring requests under the Network tab.
I don't think this simple app would have bad intentions, but your workaround isn't bulletproof either. The fact that the user has opened Dev Tools is detectable (I think the console object becomes != null?), so if the author has bad intentions, he can just unload the bad stuff when he detects that Dev Tools has been opened.
I've seen some sites unload the page and set up a breakpoint when Dev Tools is opened, so you can't even browse the source in it. I guess the trick would be to download the page's sources using wget, but if they used some obfuscation it becomes more painful to figure out what files need to be downloaded...