
> So to make this familiar, you're probably used to traditional coordinate vectors in geometry. For example a 3D vector [x, y, z]. This seems sane enough, but is actually somewhat ambiguous.

The concept of a coordinate system is not ambiguous. You have dimensions, and each can be represented as a vector that complies with specific properties, such as linear independence.

> Which cardinal direction map to x, y, z respectively (e.g. is z for up/down or forwards/backwards)?

That's a function of whatever coordinate transformation you wish to apply.

Nevertheless, I vaguely recall from school the concept of an oriented vector space and a direct coordinate system, whose definition was something like: the cross product of consecutive direction vectors yields a positively oriented vector (right-hand rule), i.e., the direction of z is determined unambiguously by the directions of x and y.
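As a quick sanity check of that right-hand-rule convention (a minimal numpy sketch, nothing from the article):

```python
import numpy as np

# In a right-handed (direct) coordinate system, z is fixed by x and y:
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
z = np.cross(x, y)  # right-hand rule: x cross y points along +z

print(z)  # [0. 0. 1.]

# Swapping the operands flips the orientation:
print(np.cross(y, x))  # [ 0.  0. -1.]
```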

> Now in geometric algebra we also have oriented basis vectors.

If I'm not mistaken, oriented referential systems are covered in intro to Euclidean geometry classes.

> The key difference is that geometric algebra has the exterior product, notated ^. For example, e1 ^ e2 is the exterior product of two oriented basis vectors. You can interpret this as being an oriented basis for the space spanned by e1, e2. And similarly for e1 ^ e2 ^ e3 being an oriented basis of the volume spanned by e1, e2, e3, etc. These are called basis blades.

Sounds like a convoluted way to refer to basic concepts like direction vectors of a direct coordinate system.

Quite bluntly, this all sounds like an attempt to reinvent euclidean geometry following a convoluted way. I mean, what does all this buy you that applying a subset of affine transformations (scaling, translation, rotation) to an orthogonal coordinate system doesn't give you already?


> Quite bluntly, this all sounds like an attempt to reinvent euclidean geometry following a convoluted way. I mean, what does all this buy you that applying a subset of affine transformations (scaling, translation, rotation) to an orthogonal coordinate system doesn't give you already?

It is attempting to reinvent Euclidean geometry, yes. I don't think it's convoluted though.

To give a prime example, take the article we're commenting on: interpolating rotations. Or more generally: interpolating transformations. Just doing this with rotations without suffering gimbal locks already brings you to quaternions. Are quaternions 'convoluted'?
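For reference, here's a minimal slerp sketch over unit quaternions stored as (w, x, y, z) numpy arrays; this is an illustration of the standard algorithm, not code from the article:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:          # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Halfway between identity and a 90-degree rotation about z is a 45-degree rotation:
identity = np.array([1.0, 0.0, 0.0, 0.0])
rot90z = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(slerp(identity, rot90z, 0.5))  # ~ [0.924, 0, 0, 0.383]
```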

The fact that all objects are native to the algebra means they're composable. Take for example this slide of the formula of a 4D torus in coordinates, and in 4D PGA: https://i.imgur.com/T4hofL2.png The talk in general has a bunch of example applications: https://youtu.be/tX4H_ctggYo?t=4232

Questions such as "the intersection of this line and this plane", "the line through two points" "the circle where these two spheres intersect", "the point at the intersection of three planes", "the projection of this line on this plane" and such are trivial, native (the resulting object is part of the algebra) and exception-free in geometric algebra. E.g. two planes always intersect, it just happens that the intersection is a line at infinity if they're parallel.

The exact same code used to translate and rotate a point around the origin can be used to translate and rotate a line, or a plane around the origin.

Also note that most of computer graphics already realizes that embedding our geometric space into a larger space is useful. Projective geometry (embedding 3D into 4D) is already everywhere, because it unifies translations and rotations into a single concept (matrix multiplication). Geometric algebra simply goes a step further.
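To make that embedding concrete, a minimal numpy sketch: a rotation and a translation of a 3D point, unified as a single 4x4 matrix product in homogeneous coordinates:

```python
import numpy as np

theta = np.pi / 2  # 90-degree rotation about z
rotate = np.array([
    [np.cos(theta), -np.sin(theta), 0, 0],
    [np.sin(theta),  np.cos(theta), 0, 0],
    [0,              0,             1, 0],
    [0,              0,             0, 1],
])
translate = np.array([
    [1.0, 0, 0, 5],   # shift x by 5
    [0, 1.0, 0, 0],
    [0, 0, 1.0, 0],
    [0, 0, 0, 1.0],
])

# One matrix now carries both operations:
transform = translate @ rotate
point = np.array([1.0, 0.0, 0.0, 1.0])  # the point (1, 0, 0) embedded in 4D
print(transform @ point)  # [5. 1. 0. 1.]: rotated to (0,1,0), then shifted by (5,0,0)
```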


> Quite bluntly, this all sounds like an attempt to reinvent euclidean geometry following a convoluted way

I think that's the idea.

> I mean, what does all this buy you that applying a subset of affine transformations (scaling, translation, rotation) to an orthogonal coordinate system doesn't give you already?

Well, algebra: with the geometric product one can solve geometric equations in a nice, unified way.


Can you give an example of a nice solution to a problem?



Thanks for the last paragraph, I would also like to know. To me it looks like linear algebra with a lot of smoke and mirrors as well.


There's no smoke. But there surely are mirrors. (Group theory joke)


> Hang on, is that a Natural number, or a signed Integer? Is it place-value, and if so what's the base? Big- or little-endian?

I fail to see how that makes any sense. OP clearly referred to the unidimensional nature of a scalar in contrast to the n-dimensional nature of a vector, or the n*m-dimensional nature of a matrix. It makes no sense to try to go off on a tangent regarding, say, the resolution of a scalar representation. Perhaps the concept of a ray gets halfway there. Perhaps the concept of a line at infinity.

The concept of a scalar is familiar, as is the concept of a vector. Using the terms "scalar with a direction" to describe a vector makes no sense if you come from that starting point. Perhaps magnitude+direction vector rings a bell, because that's also a basic description of a vector.

Perhaps the author made a mistake, or does not have a good math background, or is filling in his knowledge gaps by coming up with definitions. Or perhaps he's actually referring to different concepts that the readership is not familiar with. Who knows?

> This is tongue-in-cheek; but from a software perspective, all those representations have the same "API" (arithmetic).

This assertion is quite wrong. Both scalar and vector, in this context, are data types. Describing a data type as a data type of a data type is meaningless and does not compute.


> The concept of a scalar is familiar, as is the concept of a vector.

> Using the terms "scalar with a direction" to describe a vector makes no sense if you come from that starting point.

Now I'm confused: what is a vector, if not a "scalar with a direction" (or indeed "an oriented scalar value", as the parent quoted)?

> Both scalar and vector, in this context, are data types. Describing a data types as a data type of a data type is meaningless and does not compute.

I disagree; "scalar value" is an interface, supporting addition, multiplication, negation, etc. (e.g. https://hackage.haskell.org/package/base-4.16.1.0/docs/Prelu... https://smlfamily.github.io/Basis/real.html ). Many data structures can implement this interface (Float, Double, Rational, Fixed, Nat, Int, etc.).

Likewise, "vector" is an interface, supporting vector addition, scalar multiplication, and (crucially in this context) inner, wedge and geometric products. Many data structures can implement this interface (e.g. a tuple of scalars, as coefficient of a fixed orthonormal basis; a scalar length paired with angular spreads projected in fixed perpendicular planes; etc.)
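As a rough sketch of that interface view (illustrative Python, not any particular library's API; one data structure, coefficients over a fixed orthonormal basis, implementing the "vector" interface):

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec3:
    """Coefficients over a fixed orthonormal basis e1, e2, e3."""
    x: float
    y: float
    z: float

    def __add__(self, other: Vec3) -> Vec3:          # vector addition
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

    def scale(self, s: float) -> Vec3:               # scalar multiplication
        return Vec3(s * self.x, s * self.y, s * self.z)

    def dot(self, other: Vec3) -> float:             # inner product
        return self.x * other.x + self.y * other.y + self.z * other.z

    def wedge(self, other: Vec3) -> tuple[float, float, float]:
        """Wedge product: bivector coefficients on (e1^e2, e2^e3, e3^e1)."""
        return (self.x * other.y - self.y * other.x,
                self.y * other.z - self.z * other.y,
                self.z * other.x - self.x * other.z)

print(Vec3(1.0, 0.0, 0.0).wedge(Vec3(0.0, 1.0, 0.0)))  # (1.0, 0.0, 0.0): the e1^e2 plane
```

A different data structure (say, magnitude plus projected spreads) could implement the same four operations and be a "vector" just as legitimately.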


> Why? To solve what exactly? In a sane world, it'd be major.minor, with nothing appended at the end.

It's always important to get acquainted with a topic before succumbing to the desire to mindlessly criticize in ignorance.

A quick search in Wikipedia shows that USB 3.1 specifies a brand new transfer rate, dubbed SuperSpeed+ transfer mode, "(...)which can transfer data at up to 10 Gbit/s over the existing USB3-type-A and USB-C connectors (1200 MB/s after encoding overhead, more than twice the rate of USB 3.0)".

This is a different transfer mode than the SuperSpeed transfer rate specified in USB 3.0.

To allow implementations to support both transfer speeds, the implementations that supported SuperSpeed transfer rates were dubbed USB 3.1 Gen1, while the implementations that supported SuperSpeed+ transfer rates were dubbed USB 3.1 Gen2.

https://en.wikipedia.org/wiki/USB_3.0

To me that's a very convenient, and customer-centric way of putting together a standard. So there's a minor addition to a major release. Is that left to an annex? No. Do we fork standards? No. We just release a backwards-compatible v3.1 standard that in practice deprecates v3.0 and thus allows the whole industry to avoid piling up the list of current standards that we care about.


The problem is that the version of the standard is conflated with the version of the port. Here is a better system:

1) The standard has a version number. I would have suggested semver, except that USB will always be backwards compatible within the same physical port, so there's no need for separate major/minor versions. We can just use an incrementing number 1,2,3 etc.

2) The standard gives each physical port layout a name, eg. A, B, C. Port layouts can be added/removed in new versions of the standard.

3) The standard specifies and names various transfer rates, eg. T1, T2, etc with new names being added in new versions of the standard. (eg. version 1 defines speed T1, version 2 defines speeds T1, T2. etc.)

4) Manufacturers are not allowed to use the version of the USB specification in their marketing material at all. A port is not "USB 3.1", it's "USB C T2".

The version of the spec only serves to confuse customers because the whole point is that versions are backwards compatible. The only thing the customer cares about is the physical port layout and what features/transfer rate are supported over that port.

So the marketing names would be:

USB A T1, USB A T2

USB C T1, USB C T2, USB C T3

As a customer, I can easily see that I can't plug a "USB A T1" into a "USB C T1" because they are different physical ports. I can also see that "USB C T2" is faster than "USB C T1".

Admittedly things are slightly more complicated because the transfer rate is not the only "feature", we also have to consider what kinds of data can be transferred. We can extend the full marketing name to:

USB C T1 (audio)

USB C T1 (audio,video)

etc.

Obviously this is too long to have on the port itself, so we can stick with just the port type and transfer speed (eg. USB C T2) as the short name.
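For what it's worth, the naming scheme above is mechanical enough to sketch (all names are hypothetical, per the proposal, not any real USB branding):

```python
def marketing_name(port: str, tier: int, features: tuple[str, ...] = ()) -> str:
    """Build the proposed consumer-facing name: port layout + transfer tier,
    with an optional feature list for the long form."""
    name = f"USB {port} T{tier}"
    if features:
        name += f" ({','.join(features)})"
    return name

print(marketing_name("C", 2))                      # USB C T2
print(marketing_name("C", 1, ("audio", "video")))  # USB C T1 (audio,video)
```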


I think you just recreated what the USB Forum is already doing?

The standard as a whole has a version number that increments with the entire standard document.

Port layouts are named A, B, C (and deprecated Micro-A, Micro-B, and Mini-A).

The transfer rates are now named: Gen 1, Gen 2, Gen 1x2, and Gen 2x2. (These names are stupid, but they are names. Gen 1 is all the transport capabilities from USB 1.0 to USB 3.0 {including the now "classic" "superspeed"} and Gen 2 is new starting with USB 3.1 and the even worse named Gen 1x2 and Gen 2x2 are new starting with USB 3.2).

> Obviously this is too long to have on the port itself, so we can stick with just the port type and transfer speed (eg. USB C T2) as the short name.

Marking the port type is redundant because USB has been good about giving every port type a very different physical silhouette. USB A ports look nothing like USB B ports look nothing like USB C ports. (Arguably there are legitimate complaints that USB Micro-A and USB Mini-A had some at-a-glance visual issues in practice, but you couldn't insert the wrong cable into the wrong port.)

So yeah, that just leaves finding a better way to mark the ports (and cables!) with the transfer speed. "Gen 1", "Gen 2", "Gen 1x1", and "Gen 1x2" all take up a lot of space and maybe aren't the friendliest names to mark on/near ports/cables, but are in theory potentially the only bit of information that ports need to be marked that cannot be assumed by physical port shape. (ETA: Which the USB-IF Marketing names like SuperSpeed 40 and logos like an arc around the number 40 next to the USB symbol are designed to do, though the fact that people don't recognize them and they don't seem common enough in practice that people know they exist is a marketing failure more than a technical failure of the standards.)


> I think you just recreated what the USB Forum is already doing?

If they were doing this then there wouldn't be any ports described as "USB 3.1" (see point 4). The version of the standard is not just irrelevant but actively misleading to use in marketing material.

> Marking the port type is redundant because USB has been good about giving every port type a very different physical silhouette.

On the port itself, sure, but if I'm buying a laptop online then it's pretty important what type of port it has, so saying "USB C T2" is a lot more useful than just "USB T2". How would I even know what silhouette the port has?

The two most important pieces of information are the type of port and the transfer speed. There are only a small number of possibilities for each, and the latter is a purely numeric value, so a letter/number pair is sufficient.


> If they were doing this then there wouldn't be any ports described as "USB 3.1" (see point 4). The version of the standard is not just irrelevant but actively misleading to use in marketing material.

That gets to the edit I made at the end. The USB-IF Marketing group has never suggested marketing ports/cables as "USB 3.1". They've always preferred the "SuperSpeed {Bandwidth}" branding over "USB {SpecNumber}". There's a chart here: https://en.wikipedia.org/wiki/USB4#USB_3.x_.E2.80.93_4.x_dat...

(Admittedly, they are mixing messages by using "USB4 SuperSpeed {Bandwidth}" as marketing names for "SuperSpeed 20" and "SuperSpeed 40".)

But the fact that just about no one uses the USB-IF Marketing Names and instead reverts to easily confused "USB {SpecNumber}" branding is an interesting marketing failure by USB. (Not necessarily a technical failure of their specs.)

> On the port itself, sure, but if I'm buying a laptop online then it's pretty important what type of port it has, so saying "USB C T2" is a lot more useful than just "USB T2". How would I even know what silhouette the port has?

That's a slight goal post move from your previous comment about what to mark Ports/Cables. Sure, if you need to mark online materials you need to include port types. But it's still redundant on a physical port or cable to mark the port type when you are staring right at the port type.


Adding a new transfer rate seems like a reasonable place to bump the minor version number of a protocol. After reading all of that I'm even more convinced that it should have just been USB 3.2.


> Adding a new transfer rate seems like a reasonable place to bump the minor version number of a protocol. After reading all of that I'm even more convinced that it should have just been USB 3.2.

I'm not sure you read any of that. I mean, they bumped the standard version to 3.1 from 3.0 after adding a new transfer rate.

Also, USB 3.2 was bumped up from 3.1 after adding two new data transfer modes.

I'd also add that the naming scheme is quite obvious once you start to think about it.

* USB 3.0 only supports the one SuperSpeed data transfer mode.

* USB3.1 was released, and it specifies two distinct data transfer modes: the legacy Gen1 mode and the novel Gen2 mode.

* USB3.2 was released, and it supports four transfer modes: the legacy Gen1 and Gen2 modes from USB3.1, and two new dual-lane modes (Gen 1x2 and Gen 2x2) at twice the Gen1 and Gen2 rates respectively.


But then why rename 3.0 to 3.1 then 3.2? And now with USB4, everything is USB4. If I remember the upcoming standard correctly, your cheap USB-C cable only doing 480 Mb/s (USB 2 speeds) is now USB4! For free!

If a USB 3.0 cable can suddenly become USB 3.1 (or 3.2) overnight, then what's the point of versions? And what's with "Gen #" at the end? Because a consumer is easily going to be able to see that a USB 3.2 Gen 2x2 is better than a USB 3.2 Gen 1 cable? Or maybe the sellers will just not advertise the "Gen #" portion? According to the Q&A section of this Samsung external drive[0], the difference between this and a USB 3.1 drive is nothing but the model number.

</rant>

The USB Consortium has been overrun by marketing that thinks that making things more confusing (read: tricking) is better for the consumer.

[0]: https://www.amazon.com/SanDisk-256GB-Extreme-Solid-State/dp/...


> But then why rename 3.0 to 3.1 then 3.2?

I honestly have no idea what you're trying to ask.

Keep in mind that:

* USB3.0 was released in 2008.

* USB3.1 was released in 2013.

* USB3.2 was released in 2017.

Each standard is standalone, and specifies all of its transfer modes. I wouldn't be surprised if each of these specs also included fixes, and thus technically would represent different specs.


I'm not talking about the standards, but the marketing names. "USB 3.0" speed is now "USB 3.2 Gen 1" (or "USB4 Gen 1") speed just because the USB Consortium said so.


3.0 wasn’t the speed or the feature. It was an engineering spec with a lot of features, optional and required.

3.1 took 3.0’s features and added more optional features to make a larger document.

3.2 likewise.

You are likely thinking of the actual feature marketing names. Things like USB-C connectors and Superspeed 20 Gbps. These do not change release to release. They also might require conformance testing to use those names.

I actually blame the current mess on PC motherboard manufacturers for wiring up a crapload of non-conforming ports, like a “USB-A Gen 2x2” with a red plastic tab. IMHO they did this because nobody wanted to take the risk of actually pushing toward USB-C. It left them without a way to use a certified/marketing name, hence pretending engineering names were appropriate.


> I'm not talking about the standards, but the marketing names. "USB 3.0" is now "USB 3.2 Gen 1"

No, it's not.

If you implement it from the legacy USB 3.0 spec then you don't care about it. It's SuperSpeed, and that's it.

If instead you implement it to comply with the USB3.1 spec then you have two separate transfer modes specified in the 3.1 standard: the legacy Gen1 and the newly-added Gen2.

If instead you implement it based on the USB 3.2 spec then that standard specifies four distinct transfer modes: the legacy Gen1 and Gen2 modes, and the two new ones.

> just because the USB Consortium said so.

Who exactly do you think the USB consortium is? I mean, how do you think a standard is put together?


> No, it's not.

Yes, it is.

> If instead you implement it to comply with the USB3.1 spec then you have two separate transfer modes specified in the 3.1 standard: the legacy Gen1 and the newly-added Gen2.

No. If your device only supports 5 Gb/sec speeds, it's USB 3.0, yes. "SuperSpeed" and all that jazz. But with USB 3.2, it's now (magically) USB 3.2 Gen 1[0]:

> Under this rebranding, the standard previously known as USB 3.0 or USB 3.1 Gen 1 will now be called USB 3.2 Gen 1. Furthermore, the standard previously known as USB 3.1 Gen 2 will now be renamed to USB 3.2 Gen 2.

Yes, there's different transfer speeds, but if you only support 5 Gb/sec, you're a "Gen 1" device. If you're arguing that implementing USB 3.1 mandates support of the 10 Gb/sec mode, you're wrong. If that was the case, there'd be no point of this "Gen" nonsense because a 20 Gb/sec device would just be "USB 3.2" and a 5 Gb/sec device would be "USB 3.0".

Remember the whole debacle a few weeks ago about HDMI 2.1 essentially just being HDMI 2.0? Why would they do that other than to confuse? The only reason for this (USB) stupid naming is to confuse consumers into thinking that their 5 Gb/sec device is "top of the line" because it supports "USB 3.2" or "USB4".

For example, here's a "USB 3.2 Gen 1" flash drive.[1] It's a 5 Gb/sec flash drive, but it's 3.2 instead of the more appropriate 3.0. Why? To confuse.

> Who exactly do you think the USB consortium is? I mean, how do you think a standard is put together?

I think it's a consortium of companies. Many of which have marketing teams. And I'm right.[2]

[0]: https://www.msi.com/blog/new-usb-standard-usb-3-2-gen-1-gen2...

[1]: https://www.amazon.com/SanDisk-128GB-Ultra-Flash-Drive/dp/B0...

[2]: https://www.usb.org/members


> No. If your device only supports 5 Gb/sec speeds, it's USB 3.0, yes.

That's not how things work.

Devices are implemented while targeting a standard.

If you implement a USB 3.0 device then you do not support any data transfer mode capable of doing more than 5Gb/s. If you're a customer looking for more than 5Gb/s and you see that a device is only USB3.0 then you already know that it won't cut it.

That's the whole point of this submission. M1 macs don't support USB 3.1, only USB 3.0. Why? because they patently don't support the transfer speeds made possible by the new data transfer mode introduced in USB 3.1.


> That's the whole point of this submission. M1 macs don't support USB 3.1, only USB 3.0. Why? because they patently don't support the transfer speeds made possible by the new data transfer mode introduced in USB 3.1.

M1 macs support USB4.

USB specs define multiple transmission modes and speeds from port to port over a cable that one can support. They define alt modes you can support.

Separately there are conformances and marks. E.g. if your cable supports transfer according to USB 3.2 Gen 2x2 in our lab, you can _market it_ as Superspeed 20Gbps, put the logo on the connectors, etc.

So the argument would be that Apple M1 doesn’t support Superspeed 10Gbps.

Which, as an aside, I’ll need a lot more than one person testing with a single (likely non-conformant) cable before I’ll believe it.


It's the name of the standard. I'm not sure that those names were ever meant to be user-facing, but unfortunately they are. If device-makers choose to support a newer standard (say 3.2), that standard needs to support older speeds (Gen 1), in addition to newer speeds (Gen 2).


But USB 3.0 supported USB 2.0 and 1.0/1.1 speeds already without this "generation" garbage. If I plugged a USB 3.0 cable (9 pins) into a USB 2.0 (4 pin) hub, the device still worked at the lower speeds. I could even plug it into a USB 1.1 hub, and it would just work. I didn't need "USB 3.0 Gen 4"[a] (3.0) to know that it would work at "USB 3.0 Gen 2"[a] (1.1) or "USB 3.0 Gen 3" (2.0) speeds.

[a]: Made up names; USB 3.0 didn't have this mess


Sure it did.

You have USB 3 Low speed and Full Speed (aka usb 1), USB 3 High Speed (aka usb 2), and USB 3 SuperSpeed.

Expecting the USB consortium to give things useful names or at least let them keep their names we got used to is the same madness as expecting a singular useful version number from anything Sun derived.

Anyway, according to the article everything links at USB 3.1 Gen 2 SuperSpeed+, but then usually doesn't send data at anywhere near the link rate, so that's not an extra layer of confusion at all.


That was a different mess that people ignored entirely.

The x.y numbers were not a mess until 3.1


> But USB 3.0 supported USB 2.0 and 1.0/1.1 speeds already without this "generation" garbage.

No, not quite. What do you think the USB3.0 SuperSpeed is? Why, a brand new transfer mode.

> If I plugged a USB 3.0 cable (9 pins) into a USB 2.0 (4 pin) hub, the device still worked at the lower speeds.

You'd be glad to know that nothing changed in that regard with USB3.0, 3.1, and 3.2.

In fact, the whole point of this submission is to showcase how M1 macs are only capable of a lower data transfer speed than the new Mac Studio, thus proving that the M1 macs don't support USB 3.1 Gen2, aka SuperSpeed+.


You keep dancing around my arguments. The issue isn't that things have changed; it's that they've changed in a way that makes things confusing for consumers. Go ask a random person on the street which is better: "USB 3.2 Gen 1 or USB 3.0?" I guarantee you'll find people thinking "USB 3.2 Gen 1" is better because it's a bigger number. But despite that, they're the exact same thing: 5 Gb/sec ("SuperSpeed").


> You keep dancing around my arguments.

No, not really. Feel free to point out exactly which argument you feel was ignored.

> The issue isn't that things have changed; it's that they've changed in a way that makes things confusing for consumers.

That seems to be the source of your confusion: nothing has changed. Each USB spec is backwards compatible and specifies the same data transfer modes.

And there is no confusion: if you pick up a USB2 data storage device you know beforehand it won't support SuperSpeed. If you pick up a USB3.0 device you know beforehand it won't support SuperSpeed+. If you pick up a USB3.1 device you know beforehand it won't support SuperSpeed+ 2x or 4x.

The whole point of the submission is to call out that M1 macs, unlike the new Mac Studio, don't support USB3.1.

The article also clearly states that Apple doesn't actually advertise USB3.1, just USB3.


> Feel free to point out exactly which argument you feel was ignored.

The retroactive renaming of speed+versions. I'm not talking about the Mac.

> If you pick up a USB3.1 device you know beforehand it won't support SuperSpeed+ 2x or 4x.

My whole argument is that this confusion wouldn't be an issue if the USB Consortium had reserved USB 3.1 for 10 Gb/sec speeds exclusively. In other words, this:

    3.0: 5 Gb/s  "SuperSpeed"
    3.1: 10 Gb/s "SuperSpeed+"
    3.2: 20 Gb/s "SuperSpeed++"
That, and that alone (with none of the "Gen" nonsense) would avoid confusion. Then, if I pick up a USB 3.1 device, I would know it's 10 Gb/sec "SuperSpeed+" without having to use a stupid "generation" number. But no, the USB Consortium decided to deprecate 3.0 and 3.1 because all new devices are "3.2 Gen whatever". That's confusion.
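For contrast, here's the actual retroactive renaming being argued against, laid out as data (a sketch; speeds per the USB 3.x specs):

```python
# How the same 5/10/20 Gb/s modes ended up named across spec revisions:
renames = {
    "5 Gb/s (SuperSpeed)":   ["USB 3.0", "USB 3.1 Gen 1", "USB 3.2 Gen 1"],
    "10 Gb/s (SuperSpeed+)": ["USB 3.1 Gen 2", "USB 3.2 Gen 2"],
    "20 Gb/s (SuperSpeed+)": ["USB 3.2 Gen 2x2"],
}
for speed, names in renames.items():
    print(f"{speed}: {' -> '.join(names)}")
```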


Versions are not speeds.

3.2 continues to describe everything in 3.0, which means it continues to describe how to make devices supporting 5 gbps over USB-A/B


Well, the argument is that versions not being speeds anymore is the problem and it would've been easier if they were. Like they are in Wi-Fi for example.


If you use the smallest version number that fits your device, then you avoid confusion.


Your vendor should never have said 3.0 or 3.1 or 3.2. They should have said Superspeed.

There’s no point complaining that the engineering spec versioning strategy is confusing, when no consumers should have been exposed to it. The problem is squarely on manufacturers and the tech press.


The Mac Studio is an M1 Mac, so you might want to rephrase that part.


Huh?

> the implementations that supported SuperSpeed transfer rates were dubbed USB 3.1 Gen1, while the implementations that supported SuperSpeed+ transfer rates were dubbed USB 3.1 Gen2.

This is supposedly better than USB 3.0 (original standard), USB 3.1 (new standard, same SuperSpeed as USB 3.0), and USB 3.2 (new standard and also SuperSpeed+).


> This is supposedly better than USB 3.0 (original standard), USB 3.1 (new standard (...)

Not quite.

* USB 3.0 specifies SuperSpeed. No need to go on about Gen X given it's the first mode introduced by USB3, is there?

* USB 3.1 specifies two data transfer modes: Gen1 (the one introduced in USB 3.0) and Gen2 (the fancy new mode just introduced).

* USB 3.2 specifies the Gen1 and Gen2 modes from USB3.1, and adds two additional modes.


USB 3.0 also specifies lower speeds. They didn't need to use "Gen" then, and nothing changed to make them need it after.

Nobody cares if multiple speeds are "introduced by USB3". If 3.0 introduces one speed, and 3.1 introduces a different speed, people can understand that just fine.

Even if you do want to focus on "introduced by USB3", then you just need "3.[generation]" or "3 Gen [generation]". Not "3.[spec revision] Gen [generation]"


This is interesting, because I see your point and this is a good breakdown of the current naming scheme

… but it still seems indefensible. This comment almost reads like satire. I know standards are hard, really hard, but this seems indefensible. Especially the Superspeed -> Superspeed+ (this is ridiculous). Will this get simplified with USB4?


Wait, am I reading this right?

The old transfer mode was

> Superspeed

But the new mode is different, it’s name is

> Superspeed+

I’m sorry, I take your point in the first paragraph but I can’t find a way to wrap my head around how this system helps anyone.


Found the USB-IF member.


> 1. Uber is actually a higher cost/less efficient producer of urban car services than the taxi companies it has driven out of business

This doesn't seem to be true, given that in some countries you have taxi companies providing services through Uber, as well as their own ride hailing platforms.

> 2. Individual Uber drivers with limited capital cannot acquire, finance, maintain and insure vehicles more economically than Yellow Cab

I'm not sure this belief holds any truth as well. I mean, isn't the biggest cost associated with Yellow Cab the taxi medallion, which represents a +$80k additional charge over the vehicle?


It's a one time charge and also can be resold if needed.


> It's a one time charge and also can be resold if needed.

It's a hefty one-time charge that is not required to operate an Uber and thus can balloon the initial investment between 2x and 3x, and at best is capex that you have to tie down. Therefore, how is that an advantage?


> Since then, what surprises me is that this project continues to be a useful, possibly necessary tool for measuring and tuning Lambda performance.

Is performance tuning a relevant topic for AWS Lambda though? It's my understanding that lambdas are recommended only for:

* glue code for AWS events,

* run fire-and-forget workers that are neither that complex nor executed often enough to justify putting up a dedicated service, which takes hardly any work anyway.

None of these use cases is exactly performance-critical. To put it differently, if any lambda starts to handle enough executions that performance and cost starts to become a concern, the standard approach is to just handle it in a service.

Beyond the choice of the lambda runtime and how much RAM is provisioned, what else is there that's worth being tuned?


> Is performance tuning a relevant topic for AWS Lambda though?

Lambdas are also used for request/response workflows, not necessarily just async background tasks.

Another use case is lambda resolvers with AppSync if you need some sort of data that can't be obtained with the native resolvers (e.g. DynamoDB)

> Beyond the choice of the lambda runtime and how much RAM is provisioned, what else is there that's worth being tuned?

The amount of RAM being allocated to a lambda function also controls the vCPU granted to the lambda function.

There's a nice balance that can be struck with power tuning where you're paying more per millisecond for a higher RAM configuration but the duration of each invocation improves enough to the point where you're actually paying less.
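As a back-of-the-envelope sketch of that balance (the per-GB-second price and the durations below are illustrative, not measured numbers):

```python
# Lambda bills duration * allocated memory (GB-seconds). More RAM costs more
# per ms, but also grants more vCPU, which can shorten the invocation enough
# to lower the total bill.
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative on-demand price

def invocation_cost(memory_mb: int, duration_ms: float) -> float:
    return (memory_mb / 1024) * (duration_ms / 1000) * PRICE_PER_GB_SECOND

slow = invocation_cost(memory_mb=512, duration_ms=1000)   # 0.50 GB-s
fast = invocation_cost(memory_mb=1024, duration_ms=450)   # 0.45 GB-s

# Doubling RAM here more than halves the duration, so it's cheaper overall:
print(fast < slow)  # True
```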


> Lord knows they're heading in that direction with the WHO.

WHO as in World Health Organization? If so, can you please point out your rationale for your link between the WHO and "an unelected world government"?



> As long as you haven't made the active window full screen, or the window you want to switch to, then yes, it works. But it is painful :P

What's being described as "full screen" windows on macOS is not exactly that, and it's very different from full screen windows on Windows or the standard Linux window managers.

On macOS, when we click on the little green button on top of a window, that window becomes full screen *and* becomes a new single-window workspace.

https://support.apple.com/guide/mac-help/use-apps-in-full-sc...


You can "Maximize" the app (to put it in Windows parlance) by Option+Clicking the green button (while hovering over the green button, you can observe the change from "full screen" to "maximize" when pressing Option).


Or even just double-click the window’s title bar. Just like on Windows


> But are these people going to jail?

The first paragraph of the news piece states that "the head of the department responsible for Ukraine was sent to prison."

Here's the second paragraph:

> In a sign of President Putin’s fury over the failures of the invasion, about 150 Federal Security Bureau (FSB) officers have been dismissed, including some who have been arrested.

I couldn't read more paragraphs as the article seems paywalled.


Someone posted an archive link around the paywall, like I said there are two confirmed arrests.


Yes, arrested, not fired


So two "purged," 148 fired.


If you really want to win this discussion, no problem, you win. There you go, now you can carry on with your day.


> I think "annoying" is a better description.

Tying your personal health and that of your loved ones to your employer gives your boss undue leverage over you, particularly when there are life and death decisions to be made.

