There are a few distinct things here. Learning 6502 assembly is straightforward, and you'd be better off learning about simple (not modern) assembly languages at a high level: opcodes, registers, no-ops, branches, jumps, compares, accumulators, program counters, and clock cycles. From there, start writing 6502 in an emulator and see what happens. That's where you're going to learn, and the feedback will be a lot faster. Programming for an Apple II will be more about learning how to interact with devices through memory.
Julia Louis-Dreyfus? (Although looking at her filmography, it turns out that four of the movies I’d seen where she’s had big roles were just in the last three years.)
fun fact: her father is a billionaire. i guess that predisposes an acting-inclined person to take a risk like seinfeld but keeps you from being hungry enough to walk hard.
I was at Cisco when the Apple iPhone was announced. It was rumored to be happening, so Cisco rushed out a Linksys VoIP(?) phone rebranded (it might have just been a sticker) as an "iPhone" so they could defend the trademark. They quickly reached an agreement with Apple. I remember they might have been getting their VPN included on the device. I'm sure there was a similar issue with iOS, and that caused me to get a lot of not-so-relevant emails from recruiters looking for mobile devs.
This goes against Hyrum's law. NAT provides the behavior 99.9% of users want, usually by default, out of the box. True firewalls can do the same thing, but not necessarily by default; the firewall might not even be on by default, and there's more room for misconfiguration. IPv6 is a security regression for most people, regardless of its architectural merits or the semantics of what counts as a firewall.
I wouldn’t put the number so high. I’ve on several occasions seen not-very-technical people unnecessarily burn money on VPSes or dedicated hosting providers because they couldn’t expose a game server for an evening session with their friends using the spare capacity on their gaming machine, because of their ISP’s NAT setup. 90% would be fairer. However, we still shouldn’t be sacrificing the agency of individual consumers to secure smoother revenue for corporations.
I brought up CGNAT because my American ISP does use CGNAT. We are now paying an extra monthly fee for a static IP, which I believe is the only option they have for getting a public IP (i.e. no intermediate fee amount for a public non-static IP).
It might be more fair to say that most American residential ISPs don't have to do that because they have access to giant legacy IPv4 allocations. Comcast alone has 65 million IPv4 addresses, for example (including a /8, /9, and /10 and several /11s).
I think they could make more money using CGNAT and leasing those IPs out to data centers. Also, another comment in this thread mentions that their cellular plan sold as a residential internet connection doesn't use CGNAT, but their phone plan from the same company does.
Maybe! CGNAT isn't free, of course, you need pretty beefy machines to handle ISP numbers of clients. So, is the capex for the machines, engineering time to set them up, and opex for keeping them running more or less than they'd make back from leasing their net blocks? Hard to say.
NAT implementations get broken all the time (NAT slipstreaming attacks). If a manufacturer is incompetent enough not to have a firewall on by default, they are probably also shipping a vulnerable NAT.
NAT slipstreaming depends on confusing fragmentation assemblers and application aware parsers. Those exist in firewalls as well. It’s not NAT specific.
And that kind of NAT effectively doesn't exist in practice, so that's quite beside the point. Such a NAT doesn't scale to more than 24 devices behind it.
See my reply to your sibling commenter. My comment was not about NAT in general, i.e. I was not denying the very real existence of stateless NAT. Rather, I was disputing the usefulness of the NAPT solution proposed above as a solution to public IPv4 address exhaustion.
> proposed above as a solution to public IPv4 address exhaustion.
It was not proposed as a solution (although, it would work). I'm pointing out that in networking many names are conflated/used generally against their specific definition. NAT/Firewall; Router/Access Point/Gateway; etc.
No, it very much does. If you want to join two network segments such that on one side all devices are on 10.1.X.X and the other all devices are 10.2.X.X, you'd use a mapping between 10.1.a.b and 10.2.a.b
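For what it's worth, that kind of 1:1 prefix translation (NETMAP) looks roughly like this in nftables — a sketch, assuming nft >= 0.9.4 / kernel >= 5.6, with the segment prefixes from the example above:

table ip netmap {
    chain prerouting {
        type nat hook prerouting priority -100;
        # rewrite destinations 10.2.a.b -> 10.1.a.b for traffic entering this segment
        dnat ip prefix to ip daddr map { 10.2.0.0/16 : 10.1.0.0/16 }
    }
    chain postrouting {
        type nat hook postrouting priority 100;
        # rewrite sources 10.1.a.b -> 10.2.a.b for traffic leaving this segment
        snat ip prefix to ip saddr map { 10.1.0.0/16 : 10.2.0.0/16 }
    }
}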
The general context here is about NATting to the public internet at large, not between particular segments. And the parent of my comment was talking specifically about NAPT, which is different from the non-port-based NAT that you're talking about.
This is a terrible argument. First, NAT doesn't provide the security behavior users want. The firewall on their router is doing that, not the address translation. Second, that firewall is on by default, blocking inbound traffic by default, so why on earth would you conjecture that router manufacturers will suddenly stop doing that if NAT isn't on by default? Third, it's not remotely likely that a user will misconfigure their firewall to not secure them any more. Non-technical users won't even try to get in there, and technical users will know better because it's extremely easy to set up the basics of a default deny config. There is no security regression here, just bad arguments.
The firewall on your typical IPv4 router does basically nothing. It just drops all packets that aren’t a response to an active NAT session.
If the firewall somehow didn’t exist (not really possible, because NAT and the firewall are implemented by the same code), incoming packets wouldn’t be dropped, but they wouldn’t make it through to any of the NATed machines. From the perspective of any machine behind the router, nothing changes; they get the same level of protection they always got.
So for those machines, the NAT is inherently acting as a firewall.
The only difference is the incoming packets would reach the router itself (which really shouldn’t have any ports open on the external IP), hit a closed port, and the kernel responds with a RST (or ICMP port unreachable). Sure, dropping is slightly more secure, but bouncing off a closed port really isn’t that problematic.
NAT gateways that utilize connection tracking are effectively stateful firewalls. Whether a separate set of ‘firewall’ rules does much good is doubtful, because most SNAT implementations by necessity duplicate this functionality; insisting on the distinction is a bit ignorant, IMO.
Meanwhile, an IPv6 network behind your average Linux-based home router is 2-3 nftables rules to lock down in a similar fashion.
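Concretely, a minimal sketch of that lockdown (assuming eth1 is the LAN interface; adjust names to taste):

table ip6 filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        # let LAN hosts out, and established/related replies back in
        iifname eth1 accept
        ct state { established, related } accept
        # ICMPv6 errors are mandatory for IPv6 to work (PMTUD etc.)
        icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem } accept
    }
}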
It's also trivial to roll your own version of Dropbox. With IPv6 it's possible to fail to configure those nftables rules; the firewall could be turned off.
In theory you could turn off IPv4 NAT as well but in practice most ISPs will only give you a single address. That makes it functionally impossible to misconfigure. I inadvertently plugged the WAN cable directly into my LAN one time and my ISP's DHCP server promptly banned my ONT entirely.
> In theory you could turn off IPv4 NAT as well but in practice most ISPs will only give you a single address
So, I randomly discovered the other day that my ISP has given me a full /28.
But I have no idea how to actually configure my router to forward those extra IP addresses inside my network. In practice, modern routers just aren't expecting to handle this; there is no easy "turn off NAT" button.
It's possible (at least on my EdgeRouterX), but I have to configure all the routing manually, and there doesn't seem to be much documentation.
You should be able to disable the firewall from the GUI or CLI for Ubiquiti routers. If you don't want to deal with configuring static IPs for each individual device, you can keep DHCP enabled in the router but set the /28 as your lease pool.
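On a plain Linux router, the routed-no-NAT version is conceptually just an address on the LAN interface plus a stateful forward chain and no masquerade rule. Something like this sketch (203.0.113.16/28 and the interface names are placeholders, not your actual allocation):

# ip addr add 203.0.113.17/28 dev eth1   (LAN devices use the rest of the /28)
table ip routed {
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname eth1 accept
        ct state { established, related } accept
        # note: no postrouting masquerade chain at all
    }
}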
In the US many large companies (not just ISPs) still have fairly large historic IPv4 allocations. Thus most residential ISPs will hand you a single publicly routable IPv4 address regardless of whether you're using IPv6 or not.
We'll probably still be writing paper checks, using magnetic stripe credit cards, and routing IPv4 well past 2050 if things go how they usually do.
Went to double check what my static IP address was, and noticed the router was displaying it as 198.51.100.48/28 (not my real IP).
I don't think the router used to show subnets like that, but it recently got a major firmware update... or maybe I just never noticed; I've had that static IP allocation for over 5 years. My ISP gave it to me for free after I complained about their CGNAT being broken for like the 3rd time.
Guess they decided it was cheaper to just give me a free static IPv4 address rather than actually looking at the Wireshark logs I had proving their CGNAT was doing weird things again.
Not sure if they gave me a full /28 by mistake, or as some kind of apology. Guess they have plenty of IPs now thanks to CGNAT.
More like even if they looked at the logs they aren't about to replace an expensive box on the critical path when it's working well enough for 99% of their customers.
I once had my ISP respond to a technical problem on their end by sending out a tech. The service rep wasn't capable of diagnosing and refused to escalate to a network person. The tech that came out blamed the on premise equipment (without bothering to diagnose) and started blindly swapping it out. Only after that didn't fix the issue did he finally look into the network side of things. The entire thing was fairly absurd but I guess it must work out for them on average.
Did you even read the second paragraph of the (rather short) comment you're replying to? In most residential scenarios you literally can't turn off NAT and still have things work. Either you are running NAT or you are not connected. Meanwhile the same ISP is (typically) happy to hand out unlimited globally routable IPv6 addresses to you.
I agree though, being able to depend on a safe default-deny configuration would more or less make switching a drop-in replacement. That would be fantastic, and maybe things have improved to that level, but then again history has a tendency to repeat itself. Most stuff related to computing isn't exactly known for a good security track record at this point.
But that's getting rather off topic. The dispute was about whether or not NAT of IPv4 is of reasonable benefit to end user security in practice, not about whether or not typical IPv6 equipment provides a suitable alternative.
> But that's getting rather off topic. The dispute was about whether or not NAT of IPv4 is of reasonable benefit to end user security in practice, not about whether or not typical IPv6 equipment provides a suitable alternative.
And my argument is that the only substantial difference is the action of a netfilter rule being MASQUERADE instead of ALLOW.
This is what literally everyone here, including yourself, continues to miss. Dynamic source NAT is literally a set of stateful firewall rules that have an action to modify src_ip and src_port in a packet header, and add the mapping to a connection tracking table so that return packets can be identified and then mapped on the way back.
There's no need to do address and port translation with IPv6, so the only difference to secure an IPv6 network is your masquerade rule turns into "accept established, related". That's it, that's the magic! There's no magical extra security from "NAT" - in fact, there are ways to implement SNAT that do not properly validate that traffic is coming from an established connection; which, ironically, we routinely rely on to make things like STUN/TURN work!
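Spelled out, the whole difference is one rule (a sketch; eth0 as the WAN interface, as in the usual examples):

# IPv4: stateful gate plus rewrite on the way out
oifname eth0 masquerade
# IPv6: the same stateful gate, minus the rewrite
ct state { established, related } accept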
> Dynamic source NAT is literally a set of stateful firewall rules that have an action to modify src_ip and src_port in a packet header, and add the mapping to a connection tracking table so that return packets can be identified and then mapped on the way back.
Yes, and that _provides security_. Thus NAT provides security. You can say "well really that's a stateful firewall providing security because that's how you implement NAT" and you would be technically correct but rather missing the point that turning NAT on has provided the user with security benefits thus being forced to turn it on is preventing a less secure configuration. Thus in common parlance, IPv4 is more secure because of NAT.
I will acknowledge that NAT is not the only player here. In a world that wasn't suffering from address exhaustion ISPs wouldn't have any particular reason to force NAT on their customers thus there would be nothing stopping you from turning it off. In that scenario consumer hardware could well ship with less secure defaults (ie NAT disabled, stateful firewall disabled). So I suppose it would not be unreasonable to observe that really it is usage of IPv4 that is providing (or rather forcing) the security here due to address exhaustion. But at the end of the day the mechanism providing that security is NAT thus being forced to use NAT is increasing security.
Suppose there were vehicles that handled buckling your seatbelt for you and those that were manual (as they are today). Someone says "auto seatbelts improve safety" and someone else objects "actually it's wearing the seatbelt that improves safety, both auto and manual are themselves equivalent". That's technically correct but (as technicalities tend to go) entirely misses the point. Owning a car with an auto seatbelt means you will be forced to wear your seatbelt at all times thus you will statistically be safer because for whatever reason the people in this analogy are pretty bad about bothering to put on their seatbelts when left to their own devices.
> in fact, there are ways to implement SNAT that do not properly validate that traffic is coming from an established connection; which, ironically, we routinely rely on to make things like STUN/TURN work!
There are ways to bypass the physical lock on my front door. Nonetheless I believe locking my deadbolt increases my physical security at least somewhat, even if not by as much as I'd like to imagine it does.
The difference is that with IPv4 you know that you have that security because there is no other way for the system to work while with the IPv6 router you need to be a network expert to make that conclusion.
Look at this nftables setup for a standard IPv4 masquerade setup:

table ip global {
    chain inbound-wan {
        # Add rules here if external devices need to access services on the router
    }
    chain inbound-lan {
        # Add rules here to allow local devices to access DNS, DHCP, etc, that are running on the router
    }
    chain input {
        type filter hook input priority 0; policy drop;
        ct state vmap { established : accept, related : accept, invalid : drop };
        iifname vmap { lo : accept, eth0 : jump inbound-wan, eth1 : jump inbound-lan };
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname eth1 accept;
        ct state vmap { established : accept, related : accept, invalid : drop };
    }
    chain inbound-nat {
        type nat hook prerouting priority -100;
        # DNAT port 80 and 443 to our internal web server
        iifname eth0 tcp dport { 80, 443 } dnat to 192.168.100.10;
    }
    chain outbound-nat {
        type nat hook postrouting priority 100;
        ip saddr 192.168.0.0/16 oifname eth0 masquerade;
    }
}
Note, we have explicit rules in the forward chain that only forward packets that either:
* Were sent to the LAN-side interface, meaning traffic from within our network that wants to go somewhere else
* Are part of an established packet flow that is tracked, that means return packets from the internet in this simple setup
Everything else is dropped. Without those rules, if I were on the same physical network segment as the WAN interface of your router, I could simply send packets to it destined for hosts on your internal network, and they would happily be forwarded on!
NAT itself is not providing the security here. Yes, the attack surface is limited, because I need to be able to address this box at layer 2 (just ignore ARP and send the TCP packet with the internal dst_ip I want, addressed to the Ethernet MAC of your router), but if I compromised routers from other customers on your ISP I could start fishing around quite easily.
Now, what's it look like to secure IPv6, as well?
# The vast majority of this is the same. We're using the inet table type here
# so there's only one set of rules for both IPv4 and IPv6.
table inet global {
    chain inbound-wan {
        # Add rules here if external devices need to access services on the router
    }
    chain inbound-lan {
        # Add rules here to allow local devices to access DNS, DHCP, etc, that are running on the router
    }
    chain inbound-nat {
        type nat hook prerouting priority -100;
        # DNAT port 80 and 443 to our internal web server
        # Note, we now only apply this rule to IPv4 traffic
        meta nfproto ipv4 iifname eth0 tcp dport { 80, 443 } dnat to 192.168.100.10;
    }
    chain outbound-nat {
        type nat hook postrouting priority 100;
        # Note, we now only apply this rule to IPv4 traffic
        meta nfproto ipv4 ip saddr 192.168.0.0/16 oifname eth0 masquerade;
    }
    chain input {
        type filter hook input priority 0; policy drop;
        ct state vmap { established : accept, related : accept, invalid : drop };
        # A new rule here to allow ICMPv6 traffic, because it's required for IPv6 to function correctly
        icmpv6 type { echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept;
        iifname vmap { lo : accept, eth0 : jump inbound-wan, eth1 : jump inbound-lan };
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        iifname eth1 accept;
        # A new rule here to allow ICMPv6 traffic, because it's required for IPv6 to function correctly
        icmpv6 type { echo-request, echo-reply, destination-unreachable, packet-too-big, time-exceeded } accept;
        # We will allow access to our internal web server via IPv6 even if the traffic is coming from an
        # external interface
        ip6 daddr 2602:dead:beef::1 tcp dport { 80, 443 } accept;
        ct state vmap { established : accept, related : accept, invalid : drop };
    }
}
Note, there are only three new rules added here; the other changes are just so we can use a dual-stack table, so there's no duplication of the shared rules in separate ip and ip6 tables.
* 1 & 2: We allow ICMPv6 traffic in the forward and input chains. This is technically more permissive than it needs to be; we could block echo-request traffic coming from outside our network if desired. destination-unreachable, packet-too-big, and time-exceeded are mandatory for IPv6 to work correctly.
* 3: Since we don't need NAT, we just add a rule to the forward chain that allows access to our web server (2602:dead:beef::1) on ports 80 and 443 regardless of what interface the traffic came in on.
None of this requires being a "network expert"; the only functional difference between an actually secure IPv4 SNAT configuration and a secure IPv6 firewall is...not needing a masquerade rule to handle SNAT, and adding traffic you want to let in to forwarding rules instead of DNAT rules.
Consumers would never need to see the guts like this. This is basic shit that modern consumer routers should do for you, so all you need to think about is what you want to expose (if anything) to the public internet.
Instead of all my devices being behind one IP and using an internal subnet, now each device has a globally routable IP address that will be used... Cool, great opsec.
>This is a terrible argument. First, NAT doesn't provide the security behavior users want.
Try breaking into my machine. Login:pass are administrator:pa$$w0rd, external ip 58.19.1.129, internal ip is 192.168.1.124, the system is Windows xp, and firewall is turned off on both the computer and the box the ISP gave me.
Sure, okay. You're using RFC1918 on the internal network, so I'll need to connect to your router's WAN interface to do it, but after that it's just a matter of doing `ip route add 192.168.1.0/24 via 58.19.1.129` and then connecting to whatever I want.
How do you want to get me onto your WAN interface? Unless you happen to live near me it'd probably be easiest if you give me a tunnel. Alternately, if you change the internal network to a properly-routed non-RFC1918 range, I can demonstrate this over the Internet too.
I offered to do this once before, and the person I was talking to replied with "so, you're refusing to do it then" and blocked me. So just for the avoidance of doubt: I'm offering to do this, but if you're going to provide the test environment, you're responsible for making sure I can actually reach the test environment. Otherwise you aren't going to learn anything about NAT.
Right, and in a similar situation, if the internal device was given a routable ipv6 address by the ISP's cable modem, you could directly access that device.
This isn't a hypothetical. There are ISPs who do this out of the box. I plugged a linux box into my ISP's cable modem/router in Amsterdam and immediately noticed my ssh port was getting hammered by port scanners. This isn't what most customers, especially those who aren't technically sophisticated, expect.
I could do it if it was using a routable v4 address too, and I can do it with either RFC1918 or ULA as well (which are both routable, just not over the Internet) if I can get close enough to send the relevant packets. NAT provides no protection against any of these.
You don't normally see many SSH brute force attempts on v6, let alone getting hammered by them. I do see some, but it's mostly to obvious addresses like <prefix>::2, ::3 etc which I don't use, or to IPs you can scrape from TLS cert logs. If you set an ssh server up on an IP that you don't publicize, finding it is hard.
>How do you want to get me onto your WAN interface?
I've already given you _all_ the information you could have realistically squeezed out of me. The only thing left for you is to prove that NAT is not a security measure and break into my machine, given that you already have both login and pass.
If you had exactly those parameters with ipv6, you would have already broken in.
And like I said, I can do that if you get me into a place where I can demonstrate it.
If you want me to demonstrate that the lock on your safe isn't doing anything, you have to let me into the room where the safe is. Otherwise you won't learn anything about the lock on the safe.
You could have learned this if you were better about collecting requirements. You can tell the interviewer "I'd do it like this for this size data, but I'd do it like this for 100x data. Which size should I design this for?" If they're looking for one direction and you ask which one, interviewers will tell you.
I've done that too and, in my experience, people that ask a scaling question that fits on a single machine don't have the capacity to have that nuanced conversation. I usually try to help the interviewer adjust the scale to something that actually requires many machines, but they usually don't get it.
Said another way, how do you have a meaningful conversation about scaling with a person who thinks their application is huge, but in reality only requires a tiny fraction of a single machine? Sometimes, there's such a massive gulf between perception and reality that the only thing to do is chuckle and move on.
I had an X1 Carbon like this, only it'd crash for no apparent reason. The internet consensus, which Lenovo wouldn't own up to, was that the i7 CPUs were overpowered for the cooling, so your best bet was either underclocking them or getting an i5.
There's something that feels seductive and clever about taking the contrarian, usually pessimist stance—like you're the only one who sees things for how they really are.
Yes, those numbers are real but only in very short bursts of strictly sequential reads, sustained speeds will be closer to 8-10 GB/s. And real workloads will be lower than that, because they contain random access.
NVMe on Linux actually DMAs pages directly into host memory over the PCIe link (the device does the transfer), so it is not the CPU that is moving the data. Whenever the CPU is involved in any data movement, the 6 GB/s per core limit still applies.
I feel like you are pulling all sorts of nonsense out of nowhere. Your numbers seem made up: 6 GB/s is outlandishly small, and your justifications don't really wash. Zen 4 here shows a single core, at its absolute worst, dropping to 57 GB/s; basically 10x what you are claiming. You are correct that memory limits are problematic, but we have also had technology like Intel's Data Direct I/O (2011) that lets the CPU talk to peripherals without having to go through main memory at all (big security disclosure on that in 2019, yikes). AMD is making what they call "Smart Data Cache Injection", which similarly makes memory speed not gating. So even if you do divide the 80 GB/s memory speed across 16 cores on desktop and look at 5 GB/s, that still doesn't have to tell the whole story.
https://chipsandcheese.com/p/amds-zen-4-part-2-memory-subsys...
https://nick-black.com/dankwiki/index.php/DDIO
As for SSDs: for most drives, it's true that they cannot sustain writes indefinitely. They often write in SLC mode, then have to rewrite and re-pack things into denser storage configurations that take more time to write. They'll do that in the background, given the chance, so it's often not seen. But write, write, write and the drive won't have the time.
That's very well known and very visible, and most review sites worth their salt test for it and show sustained write performance. Some drives are much better than others. Even still, a Phison E28 will let you keep writing at 4 GB/s until just before the drive is completely full. https://www.techpowerup.com/review/phison-e28-es/6.html
Drive reads don't have this problem. When review sites benchmark, they are not benchmarking some tiny nanosliver of data. Common benchmark utilities will test sustained performance, and it doesn't suddenly change 10 seconds in or 90 seconds in or whatever.
These claims just don't ring true to me.
6 GB/s may well be in the ballpark for throughput per core whenever all cores are pegged to their max. Yes, that's equivalent to approx. 2-3 bytes per potential CPU cycle; a figure reminiscent of old 8-bit processors! Kind of puts things in perspective when phrased like that: it means that for wide parallel workloads, CPU cycle-limited compute really only happens with data already in cache; you're memory bandwidth limited (and should consider the iGPU) as soon as you go out to main RAM.
> accessing memory mapped device memory instead of using DMA like in the good old days
That's not actually an option for NVMe devices, is it? The actual NAND storage isn't available for direct memory mapping, and the only way to access it is to set up DMA transfers.
Sequential read speed is attainable while still having a (small) number of independent sequential cursors. The underlying SSD translation layer will be mapping to multiple banks/erase blocks anyway, and those are tens of megabytes each at most (even assuming a 'perfect' sequential mapping, which is virtually nonexistent). So you could be reading 5 files sequentially, each only producing blocks at 3 GB/s. A not totally implausible access pattern for e.g. an LSM database or object store.
That doesn’t sound right. A single core should be more than fast enough to saturate IOPS (particularly with io_uring) unless you’re doing something insane like a lot of small writes. A write of 16 MiB or 32 MiB should still be about one SSD op; more CPUs shouldn’t help (and in fact two 16 MiB ops should be slower than one 32 MiB op).
Be careful not to confuse using the material and distributing it. There are open legal cases sorting out what fair use means for generative AI. Distribution (seeding in the case of torrents) of this material isn't legal. It got Meta in trouble, and it's getting Anna's archive in trouble.
Apple just reduced Vision Pro production, but Liquid Glass was in motion well before that. What leaves me scratching my head is that I never got the impression Apple believed in Vision Pro. It launched because, after years of research, management wanted to see if the effort was worth continuing to invest in, but that wasn't a vote of confidence.
I'll have to second this. It's not even on Apple's homepage! I hadn't heard it mentioned for months before today. It had its niche share of users who actually found it useful, but apart from them it seems that the world is not ready for spatial computing (or maybe current spatial computing isn't ready for people, who knows?).
I'm hoping the new Valve headset will be like, 60% of what the Apple Vision Pro is. My boss got the Vision Pro on launch day and it is really premier hardware: visuals that are almost exactly like seeing the thing you're looking at in real life, and the hand sensing / interactivity was the best I have experienced, even though it still had flaws.
But being tied to Apple's ecosystem, not being really useful for PC connection, and the fact that at least at the time developers were not making any groundbreaking apps for it all makes it a failure in my book.
If Valve can get 60% of that and be wirelessly tied to a PC for VR gaming, then even if they charge $1800 for their headset it will likely be worth it.
I have a Vision Pro (obtained on day 1 for development purposes), and have given demos of it to a number of non-enthusiast/non-techie people.
All of them immediately hate that it’s bulky, it’s heavy, it messes with your hair, messes with your makeup, doesn’t play well with your glasses, and it feels hot and sweaty. Everyone wants to take it off after 5-10 minutes at most, and never asks to try it again (even tho the more impressive 3D content does get a “that’s kinda cool” acknowledgment).
The headset form factor is just a complete dud, and it’s 100% clear that Apple knew that but pushed it anyway to show that they were doing “something”.
Exactly. More expensive than a high end desktop or laptop while having less useful software than an iPad. No thanks.
If it were around the $500 point I’d pick one up in a heartbeat. Maybe even $1000. But $3500 is nuts for how little they’re offering. It seems like a toy for the ultra rich.
I assumed the price would eventually come down. But it seems like they’ll just cancel the project entirely. Pity.
I’m assuming Vision Pro is to some future product what the Newton was to the iPhone. It will provide some useful insight way ahead of its time, but the mainstream push will only happen after a number of manufacturing breakthroughs allow for a comfortable daily-driver UX. Optics and battery tech will need multiple generational leaps to get to a lightweight goggle / sunglasses form factor with Apple-tier visuals, tracking, and battery life…
Magic Leap 2 and HoloLens 2 proved that we still haven't cracked the code on AR/XR. Similar price point, plenty of feasible enterprise use cases for folks willing to pony up money to hire Unity or Unreal devs. And I'm sure there are enough of them tired of being flogged to death by the gaming industry. But they both went splat.
It's going to take a revolution on miniaturization AND component pricing for XR to be feasible even for enterprise use cases, it seems.
It has incrementally improved, and gotten cheaper, to the point that I now see them everywhere. When they first came out, they were pretty expensive. Remember the $17,000 gold Watch (which is now obsolete)? The ceramic ones were over a couple of grand.
But the dream of selling Watch apps seems to have died. I think most folks just use the built-in apps.
The $17,000 Apple Watch was a (rather silly) attempt to compete in the high end watch space. However, they also launched the base "Sport" model at US$349.
Not really anything like the watch, the existence of a stupidly expensive "luxury" version doesn't change the fact that the normal one started at $350.
I think the current rumor is that development of a cheaper XR headset has been shelved in favor of working on something to compete with Meta's AI glasses.
Did they commit to additional production of the Vision Pro? I read their announcement as quiet cancellation of VR products. They announced some kind of vaporware pivot, but I didn't read a single analyst projection that Apple ever intended to bring another wearable to market. Customer usage statistics of the Vision Pro are so low Apple hasn't even hinted about reporting on them.
Wearable products, outside of headphones, have a decade-long dismal sales record and even more abysmal user retention story. No board is going to approve significant investment in the space unless there's a viable story. 4x resolution and battery life alone is not enough to resuscitate VR/AR for mass adoption.
That's probably regional then. In my area most people using watches nowadays are usually into sports.
I must admit I don't understand the point of a smartwatch when most people have their smartphone in their hand a significant amount of the day, and smartphone screen sizes have been increasing over the years because people want to be able to doomscroll through pictures and videos and interact with WhatsApp all day. I don't know how you can do that from a tiny screen on a watch.
Those like me who don't subscribe to that way of living don't want distractions and notifications, so they use regular watches and would see a device that needs to be charged every few days as a regression.
Some people said payments, but I see people paying with their smartphone all the time; since they have it in hand or in a pocket very close by at any time, having it on a watch doesn't look like a significant improvement. I'd be curious to see a chart of smartwatch adoption by country.
Apple Watches have the highest market share in a lot of the world's markets. According to this analysis [1], watchOS (Apple Watches) makes up around half of all smartwatches used in Europe. Global sales put Apple at around 20-30% market share, with brands like Samsung and Garmin around 8% [2]. I haven't found good US-only statistics to show what the market share of watchOS is, but I'd imagine it's probably close to 50% or more.
I do agree though, anecdotal experiences will vary depending on the kind of people you hang out with. For the people I know heavily into running and cycling, brands like Garmin are overrepresented. Meanwhile lots of other consumers practically don't even know these are options.
Recent moves have convinced me that Apple is getting ready to push Vision Pro substantially harder.
In recent weeks, I’ve been getting push notifications about VP.
They hired Alex Lindsay for a position in Developer Relations.
And there’s the M5 update.
Just remember, it’s a lot cheaper than the original Mac (inflation adjusted). Give it 40 years – hell, given the speed of change in tech these days, it won’t even take 10.
I think they bought the metaverse hype and rushed it out. If only they had put half the energy into AI, we'd have Create ML with something newer than YOLOv2 in 2026.