The legislature is supposed to be the strongest branch. But it's so busy with infighting that more and more power falls to the executive, just so that things actually get done.
Something I've noticed is that the SD card corrupts easily, though that may be simply because I'm using a phone charger as the power supply.
I discovered that although the Model B does not support natively booting off USB, you can still put an updated bootcode.bin [1] on the SD card which will enable this functionality. Hopefully my flash drive will not corrupt as easily.
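In case it's useful to anyone, here's a rough sketch of the idea in Python. The mount point is hypothetical, and you should check against the official docs that the bootcode.bin in the raspberrypi/firmware repo is the right one for your board before trusting this:

```python
# Minimal sketch: put an updated bootcode.bin on the SD card so the Pi
# chain-loads the rest of the boot process from a USB drive.
# Assumptions: the SD card's boot partition is already mounted at
# BOOT_MOUNT (a made-up path), and the bootcode.bin from the
# raspberrypi/firmware repo is the one you want.
import pathlib
import urllib.request

BOOT_MOUNT = pathlib.Path("/media/boot")  # adjust to your actual mount point
FIRMWARE_URL = (
    "https://raw.githubusercontent.com/raspberrypi/firmware/master/boot/bootcode.bin"
)

def install_bootcode(boot_mount: pathlib.Path = BOOT_MOUNT) -> None:
    """Download bootcode.bin and write it to the SD card's boot partition."""
    if not boot_mount.is_dir():
        raise SystemExit(f"boot partition not mounted at {boot_mount}")
    data = urllib.request.urlopen(FIRMWARE_URL).read()
    (boot_mount / "bootcode.bin").write_bytes(data)
    print(f"wrote {len(data)} bytes to {boot_mount / 'bootcode.bin'}")

if __name__ == "__main__":
    install_bootcode()
```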
> It's not even clear exactly how our brains work so it's hard to imagine that they couldn't be implemented with a sufficiently powerful computer...
Not commenting on what OP said, but I don't think this is correct. Even in principle, how can any computational process produce conscious experiences, which are by nature subjective and unquantifiable?
We don't understand yet, in any formal sense, what consciousness and subjective experience are. It may turn out that they are fundamentally different than mathematics and computation, but the vast majority of scientists believe today that they are not.
The most common belief is that consciousness is simply the self-introspection of a sufficiently powerful computer that can form models of other agents. That is, this computer is able to infer what other agents may do using a model of their inner state, such as beliefs about the world and desires; then, if it applies the same model to itself, it comes up with a similar image of an agent, which it calls its own consciousness. Qualia and such are then just illusions, theoretical properties of these models, not fundamental properties of this world (somewhat equivalent to saying that in fact we are all philosophical zombies and no one has any qualia).
I am not claiming that we know for a fact this is true. But we also don't know anything that can conclusively disprove these hypotheses for now, certainly not something as simple as the idea of qualia.
Calling consciousness “subjective” feels kind of like calling water “wet”. But anyways, life itself is built on computational processes. Most of these processes are “purpose-built” to accomplish very specific tasks, but in humans the development of a massive cortex created something of a general-purpose computer. Consciousness seems to be the necessary compromise to run such an embodied computer on top of all the other functions of the brain that serve to keep us alive.
I don't think you can say that life is built on computational processes unless you use a definition of "computation" that is so vague and all-encompassing that it becomes effectively meaningless.
The Wikipedia definition of "computation" is "any type of calculation that includes both arithmetical and non-arithmetical steps and which follows a well-defined model". But this only makes sense in the context of a designer or observer external to the computation who can identify what that model is and thereby make sense of the output. So you can't say that brain processes are computational, much less life itself, without committing some variation of the homunculus fallacy.
John Searle (famously known for his Chinese Room thought experiment) made this argument in a paper called "Is the Brain a Digital Computer?" [1] He points out that "if we are to suppose that the brain is a digital computer, we are still faced with the question 'And who is the user?'"
A related problem is qualia. There is no computational process that will produce the sensations of colour or sound or touch. At best you will have some representation that requires having actually experienced those sensations to understand it. So a computational process cannot be the basis of or an explanation for those sensations, and therefore consciousness generally.
> I don't think you can say that life is built on computational processes unless you use a definition of "computation" that is so vague and all-encompassing that it becomes effectively meaningless.
I mean, there's nothing physically stopping us from simulating a brain, right? It's a finite object with a finite amount of physical ingredients, and therefore with a finite amount of computing power we can simulate what it does. To me personally, that's a computational process. Maybe that's an overly broad definition of computation, but I think these debates tend to be about whether there is something fundamentally different about "life" (by which I assume you include consciousness). But maybe that's not what you're saying.
> He points out that "if we are to suppose that the brain is a digital computer, we are still faced with the question 'And who is the user?'"
What does that question even mean? I think it seems deep because we humans have a tendency to ascribe some sort of supernatural aura to our lived experience. Life is something incredible but that (at least to my knowledge) is not uncomputable...
> There is no computational process that will produce the sensations of colour or sound or touch.
Got one: the brain!
> At best you will have some representation that requires having actually experienced those sensations to understand it.
I think you're missing the central point, which is that computation is observer relative. Anything can be interpreted as a computational process.
Searle: "Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements which is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar then if it is a big enough wall it is implementing any program, including any program implemented in the brain."
That's why Searle asks "who is the user?" At some point things have to stop being observer relative and have an intrinsic meaning or essence of their own.
> Got one: the brain!
That's circular reasoning. The point is that qualia are not something which, in principle, can be the subject of computation. There is no way to represent the fullness of sensation itself, like the redness of red or the softness of silk, as information. So how can our brains be "computing" it?
> I think you're missing the central point, which is that computation is observer relative. Anything can be interpreted as a computational process.
I see what you're saying, and maybe I am misunderstanding your point, but to me it seems like you've gotten yourself bogged down in wordplay when there is something much simpler going on: say I have a human named Bob from Des Moines, and next to him is a machine constructed to approximate Bob to arbitrary accuracy (this is possible because Bob is made up of a finite number of particles/wavefunctions). Are you arguing that there's something special about Human Bob? If so, what is your argument for that? The two are "indistinguishable", and by that I mean that whatever threshold you have for two things to be "indistinguishable" (practically speaking), you can technically make a reproduction of Bob that satisfies that threshold.
> That's circular reasoning. The point is that qualia are not something which, in principle, can be the subject of computation. There is no way to represent the fullness of sensation itself, like the redness of red or the softness of silk, as information. So how can our brains be "computing" it?
I would argue this is circular reasoning. "There is no way to represent the fullness of sensation itself" -- yes I would argue there is: whatever time-dependent set of physical states make up this "realization" in your brain.
> I think you're missing the central point, which is that computation is observer relative. Anything can be interpreted as a computational process.
This is completely wrong. The opposite is true: computation is a mechanical process, it does not depend on an observer giving it meaning. It's true that the same mechanical process can be interpreted as different computations, but they will have the exact same computational properties (e.g. complexity), only the result will be interpreted differently.
In particular, it is extremely unlikely that the wall is implementing WordStar, because WordStar is a highly structured computation. The wall MIGHT be implementing some very simple additions, and essentially any process is implementing any one-step computation.
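A concrete illustration of that point (my own toy example, in Python): the same mechanical process, here a 16-bit adder with wraparound, can be read as unsigned addition or as two's-complement addition. The mechanism and its cost are identical under both interpretations; only the reading of the output bits differs.

```python
# One mechanical process (16-bit addition with wraparound), two interpretations.
def add16(a: int, b: int) -> int:
    return (a + b) & 0xFFFF  # the "machine" does identical work either way

bits = add16(0xFFFE, 0x0001)  # output bit pattern: 0xFFFF

# Interpretation 1: unsigned arithmetic -> 65534 + 1 = 65535
print(bits)
# Interpretation 2: two's complement -> (-2) + 1 = -1
print(bits - 0x10000 if bits & 0x8000 else bits)
```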
Presumably the redness of red is VERY hard to communicate fully brain to brain, because the experience of it depends upon every input and computation before that point; however, we manage to do it well enough.

It's like saying that some deficits in computability prevent us from doing arithmetic and therefore from launching rockets successfully at distant targets.

I might never know the exact pattern in my brain of the "redness of red" as experienced by you, but it seems to work well enough for my brain to form a pattern similar enough to communicate thoughts, just as the incompleteness or inherent imprecision of measurement doesn't prevent the rocket from being launched successfully.
The issue is not whether we can pragmatically communicate the concept of "red" by piggybacking on some (presumed) common experience, but whether that experience of redness itself is information. It is obviously not, and I do not understand why you insist otherwise.
This idea boils down to whether you believe the human brain exists purely in physical space. Let's assume it does. There is no free will. Every thought, every neuron, every sense can be represented and is controlled solely by energy and matter. We could record the electrical signals between your optic nerve and your brain, and send those same signals to your brain again in the future. We could recreate what you perceive as red by shocking your brain in the right place at the right time. If we perfectly understood the human brain, the sensation of red would be defined as a sequence of neurons that need to be turned on and off at the right time.
As far as I know, the only thing limiting us from perfectly understanding the brain is our limitations with measuring it. I don't know of any scientific studies that claim the brain exists outside of physical space.
Let's assume the brain doesn't exist purely in physical space. Free will exists. There is something immeasurable and outside of matter and energy that experiences the color red. Sensations are impossible to define because they exist only in this immeasurable world.
I heard about a guy who claimed it was obvious that lightning and earthquakes originated from the gods themselves. I try not to think like that guy.
> If we perfectly understood the human brain, the sensation of red would be defined as a sequence of neurons that need to be turned on and off at the right time.
A sequence of neurons firing is not equivalent to the sensation of red. It doesn't even tell you anything about the nature of the sensation of colour more broadly, or why the sensation of red looks the way it does and not like, say, the sensation of blue or yellow instead.
All you have is a material correlate -- a merely descriptive physical "law".
> A sequence of neurons firing is not equivalent to the sensation of red.
Have you seen videos where people perform experiments on people's brains while they're awake? The subjects experience sensations that are inseparable from their neurons firing.
I would say the sensation of red and neurons firing are the exact same thing to the person experiencing it. It's like saying a flashlight that is on is different than photons traveling away from a light bulb with a battery and a current. They're the same thing to the observer. The sensation of red is caused by and is only possible by neurons firing. The neurons firing causes and only results in the sensation of red. The observer does not know the difference.
> It doesn't even tell you anything about the nature of the sensation of colour more broadly
I don't think seeing red tells us about the sensation of color more broadly either. I think that's a concept created through human discussion, not by our senses.
> or why the sensation of red looks the way it does and not like, say, the sensation of blue or yellow instead.
I was talking to your point of "but whether that experience of redness itself is information". I don't know why red looks the way it does, but I imagine the reason exists in the physical world and we could find out if we understood the brain.
I do think in the future we could activate someone's neurons and have them experience red, blue, and yellow in any combination we want. And we could give someone else the same experience (hypothetically we perfectly understand the brain) by activating neurons in their brain. I think that is perfectly communicating color.
> The subjects experience sensations that are inseparable from their neurons firing.
What does "inseparable" mean? That the sensation occurs at the same time that the neurons fire? That may be true, but it doesn't make them equivalent.
> It's like saying a flashlight that is on is different than photons traveling away from a light bulb with a battery and a current.
They're not the same, for what it's worth. The term "flashlight" conveys a certain intent and structure that "photons traveling away from a light bulb with a battery and a current" does not.
> The sensation of red is caused by and is only possible by neurons firing. The neurons firing causes and only results in the sensation of red. The observer does not know the difference.
The fact that two different phenomena are closely coupled via a cause and effect relationship does not make them the same phenomenon.
If you push two magnets together, the fact that the same force causes them to attract or repel does not mean that the motion of the first is literally equivalent to the motion of the second, or that the force itself is literally equivalent to either motion. They are closely correlated, but ultimately distinct.
You just can't avoid the fact that qualitative phenomena do exist in their own right. They can't be explained away using a physical model that assumes from the get go that they don't exist.
Erwin Schrodinger said:
> Scientific theories serve to facilitate the survey of our observations and experimental findings. Every scientist knows how difficult it is to remember a moderately extended group of facts, before at least some primitive theoretical picture about them has been shaped. It is therefore small wonder, and by no means to be blamed on the authors of original papers or of text-books, that after a reasonably coherent theory has been formed, they do not describe the bare facts they have found or wish to convey to the reader, but clothe them in the terminology of that theory or theories. This procedure, while very useful for our remembering the facts in a well-ordered pattern, tends to obliterate the distinction between the actual observations and the theory arisen from them. And since the former always are of some sensual quality, theories are easily thought to account for sensual qualities; which, of course, they never do.
> What does "inseparable" mean? That the sensation occurs at the same time that the neurons fire? That may be true, but it doesn't make them equivalent.
Can a sensation exist without neurons firing? The root of our conversation is the question of whether a sensation exists purely in the physical world. If it does, then it is possible to measure it. If it doesn't, then that breaks our scientific understanding of the world and would be exciting news.
> They're not the same, for what it's worth. The term "flashlight" conveys a certain intent and structure that "photons traveling away from a light bulb with a battery and a current" does not.
Yes, there is no strict definition of a flashlight. Let's use your definition of a flashlight. Is it possible in your mind to separate the concept of a flashlight from your definition? Without the things in that definition, the flashlight no longer exists. My point was that without firing neurons, the sensation does not exist.
> The fact that two different phenomena are closely coupled via a cause and effect relationship does not make them the same phenomenon.
My wording was not the best. My point was that the sensation of red is physically equivalent to neurons firing. How do we measure a sensation? If we cannot measure a sensation, does it exist in the physical world? If it doesn't exist in the physical world, then what does its existence mean to the scientific community?
> If you push two magnets together, the fact that the same force causes them to attract or repel does not mean that the motion of the first is literally equivalent to the motion of the second, or that the force itself is literally equivalent to either motion. They are closely correlated, but ultimately distinct.
I agree that these forces are distinct. We can measure the force of each magnet separately and we can define the motion of one magnet without referencing the motion of the other.
> You just can't avoid the fact that qualitative phenomena do exist in their own right. They can't be explained away using a physical model that assumes from the get go that they don't exist.
What is a qualitative phenomenon? I couldn't find information on this term.

If we can't measure a qualitative phenomenon in physical space, what does it mean for it to exist?
These discussions are normally expositions of how the other party misunderstands reality and/or terminology, with a dash of "if I don't understand it but can vaguely describe it, then it must be inexplicable."
Scott Aaronson has, IIRC, suggested the idea that the complexity of such an isomorphism could be the distinguishing factor in whether or not something should be said to be computing a particular thing. Sounds plausible to me.
I believe that if you could prove that you have an actual isomorphism, in the full formal sense of the word, the question of its complexity wouldn't really matter.
However, for a practical claim, it is probably impossible to formally prove that an interpreter function is bijective between a physical system and a computation (i.e., that it maps absolutely every possible state of the physical system to exactly one step of the computation).
However, it's important that the following argument can be made: if the evolution of a physical system is isomorphic to a computation of a particular algorithm for solving the traveling salesman problem, and if the physical system needs ~1 second for each step, then the system can't go from state A to state B in less than X seconds, where X is the number of steps that algorithm requires to get between the corresponding states. The actual interpretation of the algorithm or its purpose is not relevant here; the mathematical limits of how the computation happens remain relevant regardless.

That is because you can't find 2 different isomorphisms between the same physical system and 2 different computations that are not isomorphic to each other, if these are actual proper isomorphisms (bijective) and not just hand-wavy analogies.
It's possible to create an interpretation where all of the computation happens in the interpreter instead of the system being interpreted.

With the right algorithm, you could interpret the randomly moving particles in a gas as computing Conway's Game of Life or anything else, if the algorithm just disregards everything about the particles and contains instructions that generate the expected results of Conway's Game of Life. In that extreme case, I don't think it's useful at all to claim that the gas particles are simulating Conway's Game of Life.

In the opposite extreme, you could say that the randomly moving particles in a gas are computing the random motion of the particles in a gas. The interpretation algorithm is just "look at the particles at time t; their locations represent the particles' locations at time t." It's clear here that the system being interpreted is in fact doing all the computation and that nothing is hidden in the interpreter's work.

One interesting way to differentiate these two cases: if you want the results of a longer-running simulation, then in the latter case you let the actual system run longer, and the work to interpret it doesn't increase at all. In the former case, if you want the results of running Conway's Game of Life for 2000 steps instead of 1000, it doesn't matter how long you let the gas particles go on for; you have to do more work on the interpreting side.
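To make the contrast concrete, here's a toy Python sketch (entirely my own construction): the "motion" interpreter just reads the system's state, so longer runs cost it nothing extra, while the "Life" interpreter never consults the gas and generates the Game of Life results itself, so asking for more steps means more interpreter-side work.

```python
import random
from collections import Counter

# Toy "physical system": particles doing a random walk.
def run_gas(n_particles=10, steps=1000):
    positions = [0] * n_particles
    for _ in range(steps):
        positions = [p + random.choice((-1, 1)) for p in positions]
    return positions

# Interpretation 1: the gas computes its own motion. The interpreter's
# work is constant no matter how long the system runs.
def interpret_as_motion(gas_state):
    return gas_state  # "their locations represent the particles' locations"

# One step of Conway's Game of Life over a set of live cells.
def life_step(live):
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in live)}

# Interpretation 2: the gas "computes" the Game of Life. The gas state is
# never consulted: all the work happens here, and asking for twice as many
# steps doubles the interpreter's work, however long the gas ran.
def interpret_as_life(gas_state, steps):
    cells = {(1, 0), (1, 1), (1, 2)}  # a blinker
    for _ in range(steps):
        cells = life_step(cells)
    return cells

gas = run_gas(steps=2000)
print(interpret_as_motion(gas))      # the system did the computing
print(interpret_as_life(gas, 2000))  # the interpreter did the computing
```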
All physical processes that we understand are computational. The sun and Earth for example are a computer which is constantly computing the velocity and position of a two-body system (the sun and the earth). Computation is essentially a mechanical process, in the sense that it requires no interpretation, so the question of 'who is the user of a computer' is completely meaningless.
It is disturbing that a philosopher who writes books about these concepts does not understand even this elementary fact about computation. The whole point of developing computer theory was in fact to rid mathematics of the need for human ingenuity, to find simple mechanical rules that can be followed even by a machine to arrive at the same results that a mathematician would.
Related to qualia and the Chinese room experiment and so on, those are arguments about something we perceive, but they do not describe something that we know for sure is fundamental about the world. They may well be descriptions of an illusion we have. You can't assume the existence of qualia as proof that something can't be computational, it mostly goes the other way around: you would have to prove that qualia are real to prove that something can't be computational.
In this case, let us replace the narrow definition of thought and computation with that of an imitation human: a process which, given the current state of the brain/computer and the current state of the universe, induces changes in the brain so as to model the state of the universe, both now and in response to a hypothetical pool of possible actions, such that the actions of model and world become entwined in a way that could be modeled from the inside as the world being the result of choices, and from the outside as choices being the result of the world.

This is true even of a chess program that attempts to model the current and possible states of the chess board in such a fashion as to bring about a goal by way of selection of moves.
Suppose we take a very precise process and produce an exact physical copy of you. Despite being artificial, it ought to experience the same sorts of experiences as you. The same ought to be true of a computer simulation of the same, and of a variety of increasingly large modifications of the original design. After all, if billions of humans can pop out divergent versions of humans who are all conscious, it seems hard to argue that you are a unique configuration. In fact, if we imagine working for the next 1000 years on producing a better human being, we ought to be able to produce beings who no longer regard us as truly human, because we lack both subjective experiences they regard as essential and computational capability. Maybe they can hold a million times more data in their head at once and regard us as squirrels.
These beings might regard our workings as completely explicable and replicatable in many substrates while regarding their own workings at the far limit of their own understanding as inherently beyond all possible understanding.
Both you and they are probably wrong. Searle was an asshole.
Trivially, what you think of as subjective and unquantifiable is simply so because your brain is too complicated to be taken apart while it's running and inspected effectively with our present level of technology. A subjective experience is just how your brain models its own program in order to produce a progressively refined program that will have a higher chance of successful reproduction.
The magical thing you think is beyond comprehension just isn't real.
All you've done is replace subjective experience with talk of modeling one's own program. What you haven't done is shown how the two are in any way equivalent.
How do you go about quantifying the sensory experience of red, then? You can observe that red light has a wavelength of 620 to 750 nm, or that we've assigned it the RGB colour code of #FF0000, but neither fact actually captures or explains the sensory experience. Even trying is a fool's errand, because sensory experiences are inherently qualitative, not quantitative.
That's an assumption. Perhaps if we were able to understand the brain's inner workings, we could see that 'the experience of red' is precisely 'these 3 neurons firing every 0.0112 seconds at an intensity of X while receiving 0.001 micrograms of serotonin' (completely made up, obviously).
Until we start understanding how the brain encodes and 'computes' thought, we can't really claim to know if it is or isn't simply a computer.
> Perhaps if we were able to understand the brain's inner workings, we could see that 'the experience of red' is precisely 'these 3 neurons firing every 0.0112 seconds at an intensity of X while receiving 0.001 micrograms of serotonin' (completely made up, obviously).
Even if we knew that a person saw red when such and such neurons fired, the neurons firing would still just be a material correlate. It would be in no way equivalent to or explain anything about the sensation itself.
You are thinking of something similar to the level of today's neuroscience and brain imaging, where indeed we can only establish correlations.
But I am talking about a much more in-depth understanding of the working of the brain, similar to the level of understanding we have of a microprocessor all the way from transistors to the algorithms running on it. If we could understand human thought at a similar level, we MIGHT find out that "the feeling of red" is not fundamentally different than "the understanding that 1 + 1 = 2", and we could come up with quantifications of it in different ways, from the physical representation in the brain to a certain "bit pattern" in the abstract model of the human brain computer.
Note that the argument for qualia is not one that proves the existence of qualia - it is essentially only a definition. We have no reason to believe that the thing which the term qualia describes actually exists in the world, beyond our own personal experience, which is circular in a way. The argument goes "I feel like this thing I'm experiencing is a qualia, therefore I assume that things similar to me also have qualia", which sounds logical enough. But then, "things similar to me" is actually defined in such a way that it basically assumes qualia exist, since an AGI whose internal state we could probe precisely enough to prove that qualia do not exist for it is then assumed to be outside of "things similar to me".
> the level of understanding we have of a microprocessor all the way from transistors to the algorithms running on it
Good example, because the vast majority of people don't understand that. I tried and I still don't, nevermind someone who doesn't even care.
I mean, I know the theory, I know the individual parts, but can't quite fully understand how a complete processor works.
If someone from as early as 1920 found an advanced robot that is a combination of some Boston Dynamics model and an offline/autonomous Google Assistant (so it could walk, listen, talk, reply, and maybe pick stuff up), they would not be able to figure out how its "brain" works. At best they'd have a general idea/theory.
Same thing with our brains and current understanding of it. I believe it is possible to reverse engineer it completely, but not with today's tools.
> If we could understand human thought at a similar level, we MIGHT find out that "the feeling of red" is not fundamentally different than "the understanding that 1 + 1 = 2", and we could come up with quantifications of it in different ways, from the physical representation in the brain to a certain "bit pattern" in the abstract model of the human brain computer.
I guess the idea is that an abstract concept like "the understanding that 1 + 1 = 2" would be easier to "quantify" in the relevant sense than "the feeling of red", but I don't think that's true.
The very concept of a representation presumes an intellect in which that representation is mapped to the underlying concept. No particular physical state objectively signifies some abstract concept any more than the word "dog" objectively signifies that particular type of animal. But our mental states must be able to do so, because denying this would be denying our ability to engage in coherent reasoning and therefore self-defeating. So those mental states can't be "implemented" solely using physical states.
This argument was actually proposed by the late philosopher James Ross and developed in greater detail by Edward Feser. [1] A similar argument -- though he didn't take it as far -- was made by John Searle (of Chinese Room fame). [2]
But in any event, I would reject the notion that any representation of "the feeling of red" is equivalent to the sensation itself.
> Note that the argument for qualia is not one that proves the existence of qualia - it is essentially only a definition. We have no reason to believe that the thing which the term qualia describes actually exists in the world, beyond our own personal experience, which is circular in a way.
Well, I think it is self-evident that qualia exist for me, and that those same qualia demonstrate that there are physical correlates of qualia. I also think there is good reason to think that qualia exist in others because we share the same physical correlates.
Can I completely prove or disprove that others have qualia? No -- not you, not a rock, not an AGI. But I still have the physical correlates, which gives me some basis to draw conclusions.
> A similar argument -- though he didn't take it as far -- was made by John Searle (of Chinese Room fame). [2]
I have read the entire paper - thank you for the link! - and I find it either false or trivial (to use a style of observation from Chomsky). Searle is asserting that computers don't do anything without homunculi to observe their computation, which is patently false. If I create a robot with an optical camera that detects if there is a large object near itself and uses an arm to open a door if so, the system works (or doesn't work) regardless of any meaning that is ascribed to its computations by an observer. It is true that the computation isn't "physical" in the sense that there isn't a particle of 0 or 1 that could be measured, but it is also impossible to describe the behavior of the system without ultimately referring to the computation it performs. So, if Searle is claiming that such a system only works (opens the door) in relation to some observer, then he is obviously wrong. If he is claiming that the physical processes that occur inside the microprocessor and actuators are the real explanation for how the system behaves, not the computational model, then he is in some sense right, but that is trivially true and no one would really contest it.
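For what it's worth, here is the kind of system I have in mind, as a toy Python sketch (the sensor and actuator functions are made-up stand-ins, not real drivers): the loop opens the door whenever it detects a large object, and it does so whether or not anyone ascribes meaning to its computation.

```python
import random
import time

LARGE_OBJECT_AREA = 5000  # pixel-area threshold (an arbitrary made-up number)

def read_camera() -> int:
    """Stand-in for a real camera driver: pixel area of the largest object in view."""
    return random.randint(0, 10000)

def move_arm(open_door: bool) -> None:
    """Stand-in for a real actuator driver: open or close the door."""
    print("opening door" if open_door else "leaving door closed")

def control_loop(cycles: int = 10) -> None:
    for _ in range(cycles):
        area = read_camera()
        # The door opens (or doesn't) regardless of any meaning an
        # observer ascribes to this comparison.
        move_arm(open_door=(area > LARGE_OBJECT_AREA))
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()
```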
Furthermore, there likely is no way to actually give an accurate, formal physical model of this entire system that does not also include some kind of computational model of the algorithm it performs to interpret the photons hitting the sensor as an image, to detect the object, to determine if the object is large enough that the door should be opened, to control the actuator that opens the door etc.
Basically, you can look at human beings as black boxes that take in inputs from the environment and produce output. Searle and I both agree that there exists some formal mathematical model that describes how the output the human being will give is related to the input that it gets (including all past inputs and possibly the entire evolutionary history). However, he seems to somehow believe that computation is not necessary as a part of this formal model, which I find perplexing.
His claim that cognitivists believe that if they successfully create a computer mimicking some aspect of human capacity, then the computer IS that human capacity, seems completely foreign to me; I have never seen someone truly claim something this absurd. At most, I have seen claims that if we have successfully created a computer system mimicking a human capacity, this constitutes proof against mind/body dualism, at least for that particular capacity, which I think is relatively correct, though more formally it should be called evidence against the need for mind/body dualism rather than actual proof.
> because denying this would be denying our ability to engage in coherent reasoning and therefore self-defeating. So those mental states can't be "implemented" solely using physical states.
I don't think this holds water. A computer (the theoretical model) is, by definition, something that can perform coherent reasoning without any special internal state. A physical realization of a Turing machine can "think about" any kind of computational problem and come up with the same answer that a human would come up with, at least in the Chinese room sense. Yet we know that the Turing machine doesn't have any qualia, so why should we then believe that qualia are fundamental to reason itself?
To me, computer science has taken out all of the wind from any kind of qualia-based representation of the human mind.
> But in any event, I would reject the notion that any representation of "the feeling of red" is equivalent to the sensation itself.
This I agree with in some sense - the map is not the thing. Let's assume for a moment that we have an AGI which uses regular RAM to store its internal state. Let's also assume that the AGI claims that it is currently experiencing the feeling of seeing red. We could take a snapshot of its RAM and analyze this, and even show it to another AGI, which could recognize that some particular bit pattern is the representation of the AGI feeling of red. Still, that second AGI would not be feeling "I am seeing red" when analyzing this bit pattern. It could though feel "I am seeing red" if it copied the bit pattern into the relevant part of its own memory, even if its optic sensors were in no way receiving red light.
> If I create a robot with an optical camera that detects if there is a large object near itself and uses an arm to open a door if so, the system works (or doesn't work) regardless of any meaning that is ascribed to its computations by an observer.
Whether the system "works" or "doesn't work" is dependent on what the machine was designed to do, which is not an objective physical fact about the machine. Perhaps the machine was not meant to open the door when an object is detected, but to close it instead, or to do something else entirely; only the designer would be able to tell you one way or the other.
The same is true for all computation, and that is Searle's point.
> A computer (the theoretical model) is, by definition, something that can perform coherent reasoning without any special internal state.
Computers don't actually engage in reasoning, though, for the same reason. A machine is just a physical process, and physical processes do not have determinate semantic content.
Ross and Feser then argue that because thoughts do have determinate semantic content, they are necessarily immaterial, and I think they are correct.
(This argument is unrelated to qualia; I don't think qualia are fundamental to reason itself.)
The machine does the same thing regardless of whether you ascribe meaning to it or not. In this sense it is like the thermostat from Searle's example, which he was claiming computers are not.
This property of determinacy seems ill defined as well. It's basically defined from the assumption that the human mind is immaterial. If a machine and a human both arrive at the same result when posed a question (say, they both produce some sound that you interpret as meaning '42'), by what measure can you claim that one had semantic meaning and the other did not?
The idea of cognitivism is that there is no fundamental difference (even though of course it is very likely that the process by which this particular machine arrived at that result is different from the process by which the human did).
If I stand by a door and open it when big objects come into my field of view, how is that different from a machine doing the same?
And then, if I had a machine that could converse and act just like a human (including describing its feelings and internal sensations) while doing nothing fundamentally different from our current PCs, by what measure would you say that this machine is 'simulating' a mind and is not in fact a mind in itself? (Though of course it would be a different mind than a human would have.)
I always find this a bit like “do submarines swim?”
Dunno, don’t care, just want a functional machine. If it gets from A to B, “swimming” is irrelevant.
————
As a matter of philosophy, the typical response is that qualia happen to everything, but we only recognize them in things with dense computation and self-awareness similar to ours.
Like a cat or a dog or a whale.
We can maybe see hints of it in birds, insects (colonies), etc.
We’re starting to discover some of the complex signaling pathways in, eg, old growth forests — but they’re so out of scale and unlike us, we have trouble comprehending whatever experience, eg, the giant fungus under Oregon might have.
> Not only is this scenario 1000x worse for freedom than the very very worst that Amazon could do, it's also far more likely at the moment.
Politically motivated refusal of service by Amazon or other Silicon Valley firms is at least an order of magnitude more likely than any coup by the US military, much less a Trumpist one.
Preposterous: the ranks of QAnon and MAGA are filled with ex-military and law enforcement. They flashed their badges at the Capitol police! We are that close!
Meanwhile, this political shutdown is pure theory, a straw man floated to distract us from the violence that seethes on the platforms used to organize a fascist overthrow of the government.
But even if we take your probability assessment, damage times probability makes a violent military coup a HUGE problem compared to Amazon refusing to do business with someone.
I’m curious: what was your estimate of the likelihood of what happened last Wednesday, before it happened? Were your priors updated in any way since then?
Many may not support it, but nearly half do including many elected Republican politicians. This isn't a fringe Trump supporter issue, but a major issue within the Republican party and with American conservatism in general. Maybe the perception of conservatives being censored is just a reflection of the reality that they support using violence to obtain their goals at much, much higher rates than liberals do and are just suffering the consequences of their actions?
Thanks for the numbers. Good to see some evidence my hunch was right - I'd say 43% of the sample is "many".
YouGov speculates in their presentation of the data they gathered that perhaps more Republicans approved because they saw the actions as basically peaceful:
But is distribution sufficient to trigger liability?
Printing a letter to the editor isn't authoring it, after all. The newspaper is exercising some editorial control, in that letters to the editor are not all printed, but it isn't endorsement per se, just a judgement that there is some public interest in making it available. Any liability for libel should surely lie with the letter's author rather than the newspaper.
I also note that you ignored the latter half of my question.
> I also note that you ignored the latter half of my question.
In case you weren't aware, the answer to the second half of your question is that, currently, they aren't liable for a website comment, due to Section 230.
As for the first half of your question, the newspaper affirmatively chooses to publish the letter. That's where they get the liability. The website does not; it simply fails to censor it.
What's so surprising about that? It's not so much a "rule" as a statement of political fact. Now that the Supreme Court has become a de facto legislative body, no nominee has a chance of getting through when the presidency and Senate are not controlled by the same party.
The reality is that if we had a Democratic President and Senate, nobody on the left would be arguing that we have to hold off until the election. Chuck Schumer would no doubt be insisting that the Senate "do [its] job", as he did in 2016. [1]
It's hard to fault McConnell for doing precisely what he was elected to do -- confirm conservative judges and justices.
Contrary to what some people are implying, support for mandatory decryption is not evidence of technological illiteracy.
From the perspective of these lawmakers, encrypted storage is like a safe. You have the right to store records in a safe to keep them away from prying eyes, but law enforcement has the right to order you to unlock that safe if they have a warrant. You have the same right to store those same records on an encrypted device, but law enforcement has the same right to order you to decrypt that device if they have a warrant.
Since people will sometimes refuse to decrypt a device, even when ordered to do so by a court, these lawmakers want to require OEMs and service providers to maintain control of the keys when they encrypt information on a user's behalf so as to increase the chances that lawful decryption can take place.
Is this a bad policy? Quite possibly. It has certain risks and makes certain tradeoffs, like any other policy. But it is arrogant to assume that anyone who supports it must be ignorant of how encryption works.
If they see it as a metaphor instead of what it is, that still makes it fundamentally ignorance.

Remember the "a series of tubes" memes, long predating YouTube or its many pornographic not-quite-competitors?

It may map to a better understanding, but it is still ignorant, like somebody proposing to apply computer-antivirus-style scanning to infectious disease: gene-scanning all micro-organisms in the body.

Even if the metaphor is technically correct in some aspects (the microbes being unauthorized executables in a space), the differences are substantial enough that it cannot be called anything but ignorant by those in the know, who would point out precisely the current limitations and theoretical impossibilities, like "we can't currently read a cell's DNA without destroying it". In the case of the safe analogy, it is essentially impossible for someone to wind up ordered to open a random piece of garbage that is indistinguishable from a safe. Unlike with encryption.
With the safe analogy, I swear there's precedent that, if the security is a physical key, then a court can compel the owner to produce it. But if the safe uses a combination, the court cannot compel its divulgence, since that would violate the fifth amendment protections against being forced to testify against oneself. Encryption "keys", and the passwords from which they are commonly derived, are much more akin to combinations than to physical keys.
I think there might be a circuit split on this issue, but IMO merely divulging a combination or encryption key is not "testimonial" (and therefore not a 5th Amendment violation) except insofar as it admits knowledge of the combination or key itself. But if police can establish separately that you know it, then the "foregone conclusion" exception applies.
If you can point to specific precedent that would be helpful.
The difference is that for the government to come into my house and force me to open my safe:

1) I will know about it, and

2) the government will need a warrant.
In the case of my digital data that might be stored on google (or some other third party) I may never know that the government asked google to decrypt my data for them. In the past companies have done so without a warrant.
Maybe the contents of this bill don't work this way. I don't know.
> But it is arrogant to assume that anyone who supports it must be ignorant of how encryption works
No it's not. Because your analogy is, excuse the term, utter bullshit. Producing a safe requires an expert. A government could actually try to force all producers to give them a second key or some backdoor. Producing an encrypted message requires software. The government has no chance in hell of restricting the distribution of "illegal software". Everyone who supports that narrative is stupid. Period.
Government officials aren't stupid in general, though. So why do they support the fight against encryption? Because they want to read the messages of average Joe, not the messages of Don Heroin or Sheik Al Explosive. They want to know where the next BLM gathering will be, or where the documents about city council corruption leak.
These kinds of abuses must be addressed by Congress, not the courts.