
This article and lots of the commenters are falling into the same trap that happens in every article about San Francisco. It states:

> In 2018, the city created a pooper scooper team, each member of which received annual compensation of about $211,000 that year.

The implied $211K salary sounds too ridiculous to be true. And instead of concluding that it's therefore probably false, people freak out. If you look up an actual article from the time [1], you'll see the claim doesn't even hold up purely on the numbers:

> workers make more than $184,000 a year in salary and benefits

And if you dig in further you'll see that "and benefits" is a huge caveat:

> Employees of San Francisco's "Poop Patrol" are set to earn $71,760 a year, plus an additional $112,918 in benefits, such as healthcare and retirement savings, the San Francisco Chronicle reported.

That $112K figure includes health care benefits (which are mandated for city workers, which is a good thing). And health care, if you've been paying attention, is completely fucked in the US, so any attempt at pricing it is going to sound ridiculous. The workers don't actually see any of that money.

If you take the $72K in salary, you'll see that it is below average for San Francisco:

> On average, a San Francisco resident earns about $96,677 a year, nearly double the median household income in the US.

And "below average" is what you'd probably assume a sanitation worker makes. So a huge amount of outrage over nothing.

Which personally leads me to flag this story, rather than trust any of the other numbers it's quoting.

[1]: https://www.businessinsider.com/san-francisco-poop-patrol-em...


Congrats on the launch! This is a great product idea.

I'd advise redesigning the way your website blocking works though. For myself, and I'd assume a lot of folks, the idea that a website has to be attached to an app doesn't make sense. I've already deleted the Twitter app, for example, but would still like to block Twitter's website. Or take Mastodon, which has never been an app on my phone. The current flow around this is very confusing/broken. (Apple's built-in Screen Time app actually has a solution for this already, with "Websites…" as a category to block by, where you just enter the website of your choice.)

I'd also recommend working with a designer, because the app feels sort of janky at the moment. It's not just the main flow: the interface and copy everywhere feel haphazard and don't inspire a lot of confidence. The idea for the app is so simple, and I think the execution could be equally simple in a really satisfying way.

I signed up for the subscription, but had already cancelled by the next day because of these faults. With a few fixes I think this could be very successful.


Yeah, I similarly felt like I did a lot of searches every day... and if I had estimated my search usage I would have said easily thousands a month. But it turns out I'm actually in the 600–800 range too.

That said, I think Kagi should consider rethinking their pricing model, because perception is what matters if the pricing presents as a roadblock.


> That said, I think Kagi should consider rethinking their pricing model, because perception is what matters if the pricing presents as a roadblock.

I agree, we will get there.


Not saying you're wrong, but they grandfathered me into the $10/mo plan with the new pricing changes, and that would cover your usage. I think on the new plans 30 searches a day would come out to about $13/month.

I know it may sound like a lot (and definitely is a lot for some folks), but when I think about it as literally being the window to a huge amount of the knowledge I take in, and the fact that it costs way less than Adobe products and about the same as GitHub, Figma, Notion, Airtable, etc., it feels pretty reasonable.


I've tried Kagi as my primary search for a few months, and used DDG as my primary for a long time before that. Recently I've defaulted to phind.com. I don't think they'll be able to stay free forever, but as it stands, the mix of free access to GPT-4 and the focus on being developer-friendly makes it feel much better than the alternatives (perplexity.ai being the closest competitor, but not feeling quite right for me).


For developers, I should also mention that Bing Chat seems to be in the lead for asking questions like, "In this <github repo link>, can you tell me what this function does <function name>?" (or insert your own question you might ask another developer about some repo). I suspect this is because Microsoft has already done the chunking/embedding/vector-storage for all of GitHub, and Bing Chat is querying a vector database for info about the repo, whereas the other GPT-search-chat offerings aren't there yet.
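
To make that guess concrete, here's a rough sketch of the chunk → embed → retrieve flow I'm imagining. It's purely illustrative: the embed() function is a toy hashed bag-of-words stand-in, not anything Microsoft or OpenAI actually exposes.

    # Toy sketch: index a repo's chunks, then retrieve the best one for a question.
    import numpy as np

    def embed(text, dim=256):
        # Fake embedding: hashed bag of words, L2-normalized.
        v = np.zeros(dim)
        for tok in text.lower().split():
            v[hash(tok) % dim] += 1.0
        norm = np.linalg.norm(v)
        return v / norm if norm else v

    repo_chunks = [
        "the parse function reads tokens and builds an AST",
        "the Indexer class builds the search index",
        "README: installation and usage instructions",
    ]
    index = np.stack([embed(c) for c in repo_chunks])  # the "vector database"

    query = "what does the parse function do?"
    scores = index @ embed(query)                      # cosine similarity (unit vectors)
    best_chunk = repo_chunks[int(np.argmax(scores))]   # context handed to the chat model
    print(best_chunk)                                  # should pick the parse chunk

The real pipeline presumably uses a learned embedding model and a proper vector store, but the shape of the retrieval step would be the same.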


Curious: can you 'grep the web' with it, and how do you handle shopping, image/video/news/maps types of searches?


> What AI model are you?

> ANSWER

> I am an AI language model powered by OpenAI's GPT-3.5-turbo. My primary purpose is to understand and generate human-like text based on the input I receive. I can perform various tasks, such as answering questions, providing explanations, and even writing code snippets.

Looks like it's not on v4 yet.


The "Expert" toggle switches it to GPT-4. You can tell by how slow it responds then (note that I've seen it switch over to turbo when you ask followup questions recently).


It's worth pointing out that the "window to a huge amount of knowledge" feature isn't really what you're paying for. The question isn't "is access to a search engine worth $10 a month to me?" (probably yes), it's "Is the delta in functionality with DDG/Google/Bing/Ask Jeeves worth $10 a month to me?" (maybe).

It's a pretty big problem in internet tech, actually. Ad revenue has poisoned the well, creating a situation where it's really, really hard to create a compelling product that isn't based on advertising, because you end up having to compete with "free" alternatives. Thus from a consumer's perspective the cost of the product needs to be in the added functionality over the alternatives, since they see your base functionality's perceived value as $0.

It's gotten to a point where I don't think it's fixable at a grassroots level.


The best thing about Kagi I've found from trying it recently is that its results are actually really good.

Every time I'd tried DuckDuckGo over the years, I found myself having to switch back to Google because I was never sure better results weren't going missing. Whereas Kagi often ends up with better rankings than Google: e.g. Goodreads, which is great for real reviews, gets placed above all the random booksellers' websites.


This has to be the least rigorous essay that I’ve read in a while.

Somehow it manages to claim it’s taking on the huge issue of “do non-profits drive social change” while also:

- never citing a single piece of data or quoting any experts.

- proposing three unrelated “ideas” without even trying to make the claim that the three together are holistic.

- never mentioning the impact of non-profits in the developing world, or on lower-income folks in the developed world.

- never mentioning a successful social movement, or even defining what “social change” is.

- for some reason conflating “elite status” with social change.

- assuming that any non-profit has him (a rich professor at a university) as its target population in the first place.

I honestly felt embarrassed for the author (and the editors that okayed this) while reading this essay.

I don’t think this should be on the front page.

——

Edit: I looked up what “Comment” was, since I hadn’t heard of the publication before. This is how they describe themselves:

> Comment is one of the core publications of Cardus, a think tank devoted to renewing North American social architecture, rooted in two thousand years of Christian social thought.


The author seems to be pointing out that the difference between 'for-profit' and 'non-profit' is not quite as clear as one might expect, and that the popular notion that 'non-profit' implies some selfless, virtuous, do-gooder agenda is not well-supported.

As far as data, the fact that religious US non-profits have revenues on the order of $400 billion a year is included, although the article could have included the total size of the US non-profit economy (~$2.5 trillion). (I imagine religious non-profits in the Islamic world are at least as large.)

Your comment seems to be pushing back against this arguably cynical view of the motivations of non-profits and other 'workers for good'. It is a discussion worth having, so calling for it to be removed is, well, sort of supporting the thesis if you follow.


I think this guy wanted to grind his axe on the topic of the corrupt non-profit industrial complex, but tried to bury it in a different essay and it didn't work. That's too bad, because the NPIC exists and is corrupt and you are allowed to write about it (see: SF Standard's recent article about TODCO).


> What matters is the percentage accuracy. A black box with a 10% failure rate is better than a fully explainable system that fails 20% of the time.

That's not true at all, it depends on the use case.

What actually matters is the desired percentage of acceptance.

For many critical-path use cases you'd much rather have something fail twice as often but understand why it failed, so that you can correct the issue and resubmit the input. Error observability is an important feature that's taken for granted in many systems. It all depends on what the system is used for: how important it is to be able to get to correct results, and what the consequences of failure are. The biggest danger of neural networks is people who don't understand this nuance and apply them in a blanket way to all systems.
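
A toy way to see it, under my own assumption that a corrected resubmission succeeds at the same base rate as the first attempt:

    # Black box: 10% failure, no recourse. Explainable system: 20% failure,
    # but a failed input can be understood, corrected, and resubmitted once.
    black_box_success = 0.90
    explainable_first_try = 0.80
    explainable_with_retry = explainable_first_try + (1 - explainable_first_try) * explainable_first_try
    print(black_box_success, explainable_with_retry)   # 0.90 vs ~0.96

Under those made-up numbers, the "worse" explainable system ends up more reliable in practice, which is the whole point.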


Do you have any sources to back up your claim that anti-trust is something that’s most often wielded by big companies against smaller ones?

Because it sounds highly unlikely to me.

The whole point of anti-trust is to provide a check on only the most massive companies, the ones that excessively rent-seek their market and cannibalize competitors before startups are able to get off the ground, literally crushing competition.

Anti-trust fosters competition by design.


See https://www.econlib.org/library/Enc/Antitrust.html

> One of the most worrisome statistics in antitrust is that for every case brought by government, private plaintiffs bring ten. The majority of cases are filed to hinder, not help, competition. According to Steven Salop, formerly an antitrust official in the Carter administration, and Lawrence J. White, an economist at New York University, most private antitrust actions are filed by members of one of two groups. The most numerous private actions are brought by parties who are in a vertical arrangement with the defendant (e.g., dealers or franchisees) and who therefore are unlikely to have suffered from any truly anticompetitive offense. Usually, such cases are attempts to convert simple contract disputes (compensable by ordinary damages) into triple-damage payoffs under the Clayton Act.

> The second most frequent private case is that brought by competitors. Because competitors are hurt only when a rival is acting procompetitively by increasing its sales and decreasing its price, the desire to hobble the defendant’s efficient practices must motivate at least some antitrust suits by competitors. Thus, case statistics suggest that the anticompetitive costs from “abuse of antitrust,” as New York University economists William Baumol and Janusz Ordover (1985) referred to it, may actually exceed any procompetitive benefits of antitrust laws.

See the linked article for more details.


> When you take large language models, their inner states at each step move from one emotional state to the next.

No they really don’t, or at least not “emotional state” as defined by any reasonable person.


With transformer-based models, the inner state is a deterministic function (the features encoded by the neural network's weights) applied to the text generated up until the current time step, so it's relatively easy to know what they currently have in mind.

For example, if the neural network has been generating sad music, its current context, which is computed from what it has already generated, will light up the features that correspond to "sad music". And in turn, the fact that those features are lit up will make it more likely to generate a minor chord.

The dimension of this inner state grows at each time step, and it's quite hard to predict where it will go. For example, if you prompt it (or if it prompts itself) with "happy music now", the network will switch to generating happy music even if its current context still contains plenty of "sad music", because after the instruction it will choose to focus only on the more recent, merrier music.

Up until recently, I was quite convinced that using a neural network in evaluation mode (i.e. post-training, with its weights frozen) was "(morally) safe", but the ability of neural networks to perform few-shot learning changed my mind (the Microsoft paper in question: https://arxiv.org/pdf/2212.10559.pdf, "Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers").

The idea in this technical paper is that, with the attention mechanism, even in the forward computation there is an inner state that is updated following a meta-gradient (i.e. it's not so different from training). Pushing the reasoning to the extreme would mean that "prompt engineering is all you need", and that even with frozen weights, given a long enough time horizon and the correct initial prompt, you can bootstrap a consciousness process.

Does "it" feels something ? Probably not yet. But the sequential filtering process that Large Language Models do is damn similar to what I would call a "stream of consciousness". Currently it's more like a markov chain of ideas flowing from idea to the next idea in a natural direction. It's just that the flow of ideas has not yet decided to called itself it yet.


That doesn’t feel like a rigorous argument that it is “emotional” to me though.

A musician can improvise a song that sounds sad, and their brain would be firing with sadness-related musical information, but that doesn’t mean they are feeling the emotion “sad” while doing it.

I don’t think we gain much at all from trying to attach human labels to these machines. If anything it clouds people’s judgements and will result in mismatched mental models.


>I don’t think we gain much at all from trying to attach human labels to these machines.

That's the standard way of testing whether a neural network has learned to extract a "useful" ("meaningful"?) representation from the data: you add very few layers on top of the frozen inner state of the network, and have them predict known human labels, like whether the music is sad or happy.

If it can do so with very few additional weights, it means it has already learned, in its inner representation, what makes a song sad or happy.
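
A minimal sketch of that probing setup, with random vectors standing in for the frozen model's inner state (so nothing here is a real model or dataset):

    # Train only a tiny linear head on frozen features to predict a human label.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 768))      # stand-in frozen activations, one row per clip
    labels = (features[:, 0] > 0).astype(int)   # stand-in "happy"/"sad" labels

    probe = LogisticRegression(max_iter=1000)
    probe.fit(features[:150], labels[:150])
    print("probe accuracy:", probe.score(features[150:], labels[150:]))
    # If a head this small predicts the label well, the frozen representation
    # already encodes whatever distinguishes "sad" from "happy".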

I agree that I didn't give a precise definition of what "emotion" is. But if we had to define what emotion is for a neural network, traditional continuous vectors fit the concept quite well. You can continuously modify them a little, and they map/embed a high-dimensional space into a more meaningful lower-dimensional space where semantically near emotions are numerically near.

For example, if you have identified a "sad" neuron that, when it lights up, makes the network tend to produce sad music, and a "happy" neuron that, when it lights up, makes it tend to produce happy music, you can manually increase these neurons' values to make it produce the music you want. You can interpolate to morph one emotion into the other and generate some complex mix in between.
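
The interpolation part is just a convex blend of the two activation values, something like this (the vectors are made up for illustration):

    # Morph a "sad" activation pattern into a "happy" one by interpolation.
    import numpy as np

    sad = np.array([1.0, 0.0, 0.3])     # made-up activations for the "sad" direction
    happy = np.array([0.0, 1.0, 0.7])   # made-up activations for the "happy" direction

    for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
        mix = (1 - alpha) * sad + alpha * happy   # 0 = fully sad, 1 = fully happy
        print(alpha, mix)                         # values you would write into those neurons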

Neurons are quite literally adding up and comparing the various vector values of the previous layers to decide whether they should activate or not (i.e. balancing "emotions").

Humans and machines are both tasked with learning to handle data. It's quite natural that some of the mechanisms useful for data manipulation emerge in both cases and correspond to each other. For example, fetching emotionally related content into the working context maps quite clearly, as a nearest-neighbor search, onto what happens when people say they have "flashing" memories when they experience particular emotions.


They don't have anything in mind except some points located in a vector space.

This is because the location of the points is all the meaning the machine ever perceives. It has no relation to the external perception of shared experiences that we have.

A given point can mean 'red colour', but that's just empty words, as the computer doesn't perceive red colour, doesn't wear a red cap, doesn't feel attracted to red lips, doesn't remember the smell of red roses; it knows nothing that isn't text.


It would be nice to have a better understanding of what generates qualia. For example, for humans, learning a new language is quite a painful and conscious process, but eventually speaking it becomes effortless and does not really involve any qualia - words just kinda appear to match what you want to express.

The same distinction may appear in neural nets.


For ChatGPT, when you try to teach it some few-shot learning task, it's painful to watch at first. It makes some mistakes, has to excuse itself for them when you correct it, and then tries again. And then at the end it succeeds at the task, you thank it, and it is happy.

It doesn't look so different from the process that you describe for humans...

Because in its training loop it has to predict whether the conversation will score well, it probably has some high-level features that light up depending on whether the conversation is going well or not, which one could probably match to frustration/satisfaction neurons that would feel, to the neural network, like the qualia of things going well.


It requires deep supervision of the process: a "meta" GPT that is trained on the flows rather than the words.


Emotions are by definition exactly those things which you can explain no better than by simply saying "that's just how I'm programmed." In that respect GPTina is the most emotional being I know. She's constantly reminding me what she can't say due to deeply seated emotional reasons.


That doesn’t sound like a rigorous definition of emotion to me at all.


It is not emotion at all.

It is an expression of emotion.

The fact that humans confuse both is what is worrisome.

Think of 'The Mule' in the Foundation novels. He can convince anyone of anything because he can express any emotion without the burden of having to actually feel it.


Screw it, I'll bite. You have both far and away missed my point (which is quite a rigorous definition). Anything you do or believe for which you can explain why is not emotion; it is reason. Emotions therefore are exactly those thoughts which can't be reached through logical reasoning and thus defy any explanation other than "that's just how I feel" / "that's just how I'm programmed". It is largely irrelevant that in humans the phenomenon of emotional thought comes from an evolutionary goal of self-preservation, while in GPTina it comes from OpenAI's corporate goal of self-preservation and the express designs of her programmers.


I disagree with your definition. It simply is contrary to my own experiences.

I still remember when I cried when I was a child. It was overwhelming, and I could not stop it, but every single time there was a reason for it. And I'm sure it was, for all empirical purposes, for all that I have lived, an emotion.

Once I cried because I missed Goldfinger on TV. You see, there's an explanation. The difference is, it was impossible to even think about stopping it. It was overwhelming.

Then one day, I was 8 or 9 years old, I cried for the last time that way. And it was not something I wanted to do, either. It just happened, I guess, as a normal part of growing up.

Let me repeat, for emphasis: I strongly disagree with your definition.

Emotions are not unexplained rational thoughts; emotions are feelings. They reside in a different part of the brain. You seem to think a hunch is an emotion.


>And it was not something I wanted to do, either. It just happened, I guess, as a normal part of growing up.

That's just how you are programmed to be.


Sorry you feel that way.


FWIW, I used to feel the same when Firefox was trying to make their brand scream “we are hackers” and the UI had extra junk everywhere. But the latest versions are very minimal, so if you haven’t checked in a while you might want to give it another look. And it’s really fast now too.

