> So if you’re a manager onboarding a new employee, Workspace saves you the time and effort involved in writing that first welcome email.
If you really can't be bothered to write the welcome email, maybe just don't, rather than throwing some ML-generated drivel at a new hire as their first impression of your management. The direction this is going in is so focused on automating away the hassle of email that the end state is surely both sides of an email exchange being ML models churning away at each other.
Can we not pretend people weren't already googling sample emails for this?
It's just more efficient now to grab the same starter email that you then customize. Admittedly, Google shouldn't be pretending otherwise, either. But what they also did was make it so you can get the starter email without having to see their ads.
Edited to add: I think a better argument on this could be made about the "draft a thank you note to the team" example they give in the video. That's more the type of thing that should be personal and probably not from a starter template, googled or AI-generated. https://www.youtube.com/watch?v=6DaJVZBXETE But even here, the reality is people were googling for these starter emails already.
Piles of impersonal junk created by humans at work. People lope through life, especially work life, a lot. If you are doing repetitive welcome emails they are almost guaranteed to be impersonal, I don't think anyone could rinse and repeat personalized emails for hundreds of people without burning out.
Just because lots of people do it, doesn't mean it has to continue to be done.
If you're growing so fast you're sending hundreds of emails in a short time span, then maybe you should replace a pseudo-personal AI-written "welcome email" for new hires with something that scales better and feels personal.
"Feels personal" is the problem that AI solves, isn't it? I mean there is a difference between "feels personal" and "is authentically personal". AI can definitely do the "feels".
Besides that, it's not about how rapidly you need to send these emails out. It's about how often it's done. If you had to write birthday cards to people you barely know every day, maybe it's only one a day, it's going to get rinse and repeat very quickly. All the ways of saying "have a great day" are going to get exhausted and whoever is writing it is going to feel insincere in a short amount of time. So why not just have the AI write this sort of thing?
Slack bots welcome you to Slack channels, and when your email is set up there is a default welcome message. No one really cares, they just want to feel welcomed by something to break the ice. I don't think anyone really gives a stuff about how the message is written as long as it's honest. If I joined a company and the welcome email was signed off with "Your friendly welcome bot", I'd be happier than with some boilerplate cut-and-paste message signed by a real person pretending they aren't writing the same thing for the 5th year in their job.
I would honestly prefer blunt, straightforward, de-GPTified communication style. I hate the idea of wasting my time on reading substance-free filler, regardless of whether it was written by an AI or not.
The best ones know when to put in effort to show they care for another human being, and know when to automate a task where that doesn't matter. Welcoming somebody to a new job is the former.
Simple answer to this: "Hi, I am company X's AI, I'd like to get to know you because we will be working together here and there <welcome message>."
Basically endorse the AI and be clear that it is the thing welcoming you. If you hide the fact that an AI is pumping out an automated email, it just comes across as if the business thinks of you as a drone.
If the major contributor of an e-mail (or code) is an AI then it should be attributed to the AI in the signature/attribution fields. Otherwise the work is plagiarized.
Fortunately for middle and line managers, the vaunted 1:1 hasn't been automated yet, nor does it scale, so the technology doesn't allow managers to have more than, say, 60 direct reports effectively. Meeting with each of them for 1 hour every other week only leaves 20 hours every other week for work that isn't a 1:1. I think middle management will be safe for a little while longer.
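A quick back-of-the-envelope check of that arithmetic (assuming a standard 40-hour work week, which the comment doesn't state explicitly):

```python
# Biweekly 1:1 load for a manager with many direct reports.
direct_reports = 60
hours_per_1on1 = 1            # one hour per report
period_weeks = 2              # each report is seen every other week

meeting_hours = direct_reports * hours_per_1on1   # 60 hours of 1:1s per period
working_hours = period_weeks * 40                 # 80 working hours per period
remaining = working_hours - meeting_hours         # hours left for non-1:1 work
print(remaining)
```

The 20 remaining hours match the comment's figure.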
Less frequent than that and people start forgetting the gripes and inconveniences they need help with.
Managers are supposed to work on the system, at least in theory. 1on1’s are for the manager like server metrics are for the SRE. An early warning system of future problems. Ideally you can fix issues before the problem blows up into an alarm.
A professional midwestern 1-on-1 starts with the assumption that no work was performed since the last one and it falls on the underling to prove otherwise during this time.
> 1on1’s are for the manager like server metrics are for the SRE.
I'm not sure that is accurate. As a manager, I have a bunch of production metrics on my people. I can tell who's getting work done, who isn't, who has high defect rates in their work, who makes fewer defects, who is positive ROI, and who is not paying their keep. Where the one on ones are useful is when they are informed by the metrics, and those conversations don't take anywhere near one hour every two weeks. They take 10 to 15 minutes, max.
Ok sure but there’s more to people than raw output numbers. Where do you get info on how they’re doing as people? Whether they’re pulling 12 hour days and that’s why the output looks good or they’re happy as a clam living their best life and that’s why output is great?
Sometimes you gotta kick people to go take a vacation lest their mind/body takes an unplanned vacation for them. Even if the raw numbers are looking great. You’d prefer to have a planned outage wouldn’t you?
And hey, if you don’t need a full hour, it’s okay to make 1on1s shorter! Few people will complain about getting more time back in their lives.
But be warned: The good stuff usually comes out 5min before the end of a meeting when an employee says ”Oh yeah one more thing that’s been on my mind …”. It takes time to relax into that area of concern that’s a little scary to bring up.
Longtime manager of technical people here and can confirm the "oh yeah one more thing" is where so much of the really good stuff comes up.
No matter how explicitly you state it, ask for it, overcommunicate that you're approachable even if seemingly busy... people are averse to the impression that they're interrupting their management even when they have something important to discuss. When you create the space and have even that quick 10 to 15 minute chat every couple of weeks, you're creating space for your people to have your attention without worrying about getting your attention. And people often take advantage of that path of least resistance.
So many of the most important conversations to have with the team are less about their output and more about things they've got going on or need to know _despite_ their output. All the standard sprint metrics are just one piece of the puzzle.
I agree with you that there's more to people than numbers. But it's a lot easier to surface that when you have a frame of reference and some consistency in how you measure:
"so, last week you had a -23.5% contribution... You usually are on the positive side, so what's up?" and the employee tells me that their lead stuck them on a project where they had no experience, and didn't have time to learn how to do x or y before getting thrown at a ticket... or, "I'm having some issues with <personal>..."
Stuff as a manager you can sometimes help a lot with... and other times, well, you can be supportive. Just because numbers are involved does not mean we have to be cold and heartless... At the same time, I've found managing without numbers to just lead to clique-y political culture where people's ability to schmooze is rewarded more than their contribution to the team, and that drives away the most effective contributors.
> And hey, if you don’t need a full hour, it’s okay to make 1on1s shorter!
Agree... I think I just realized that the exception should be a long 1 on 1. Just because I schedule 15 minutes doesn't preclude taking more time if there's something in the 1 on 1 that should take longer.
> I can tell who's getting work done, who isn't, who has high defect rates in their work, who makes fewer defects, who is positive ROI, and who is not paying their keep
I've been doing engineering management for ten years and I've never seen a set of metrics that can tell you these things.
What software engineering metrics tell you who has positive ROI?
If you’re only talking to your directs for 10-15 minutes every two weeks you are neglecting your duties as a manager. The standard is 30 minutes 1x week with every direct report.
This is an interesting take, and one that directly contradicts most leadership books. Are you aware of that, and how did you come upon this management style?
And the good middle manager (yes, they exist ! and yes, they are useful !) thinks "the not very useful part of my job can be automated, yay more time for the actually interesting/useful thing !"
Realistically, the middle manager thinks, "the part of my job where it looks like I'm doing something can be automated, yay more time for 3-4 day weekends!"
The middle manager thinks, "the time and effort I spend on my job can be automated, yay!", never considering the second order effects.
This is why when a cashier at the supermarket tells me that instead of waiting in line I can use the self check-out lanes, I tell her "I don't want to help a robot take away your job."
Usually they don't understand. But sometimes you can see them start to think about what's happening.
The cashier would rather you leave them in peace than hit them with the same tiresome self-congratulatory remark they hear day in day out. Please use the self checkouts instead.
You mean to tell me that my joke, the one where I should get the item for free when it doesn't ring up, isn't hilarious, and that the cashier's heard it before?
Please don't use the self-checkout. Why should corporations worth billions or tens of billions (I guess that's what Walmart is worth) get to outsource some of their work onto their clients?
As a client, using the self-checkout is (usually) a choice I get to make. I choose to use the self-checkout because it is often faster than waiting in a line. I am thankful for the opportunity to put in my own effort in exchange for faster completion and reduced prices.
I remember reading a study somewhere that the self-checkout isn't actually faster. It feels faster because you're busy instead of passively standing in line.
That depends on the implementation. E.g. Sam's Club lets you scan items as you put them into your cart and just walk past the registers.
In the local grocery store at rush hour, cashier lines are long and self-checkout tends to be quicker for buying 3 items. Taking a whole cart into self-checkout is slower than a cashier, because the self-checkout has like a 5% chance per item of going into "assistance needed" mode.
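Taking that 5%-per-item figure at face value (it's the commenter's guess, not a measured rate), the odds of at least one "assistance needed" interruption grow quickly with cart size:

```python
# Chance of at least one "assistance needed" stop, assuming each item
# independently triggers one with probability p (the commenter's 5% guess).
def intervention_odds(items: int, p: float = 0.05) -> float:
    return 1 - (1 - p) ** items

print(f"3 items:  {intervention_odds(3):.0%}")   # a quick basket is usually fine
print(f"20 items: {intervention_odds(20):.0%}")  # a full cart stalls more often than not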
What's important to me is my time. I don't care whether it's spent scanning things myself or waiting in line for someone else to scan them. I also don't care about punishing Walmart, so whether they "get to" do something is totally uninteresting to me.
I just want to pay for my items and get out of the store, in whatever way works.
This is nearly getting to the point of "don't use shovels, think of all the ditch diggers we'll put out of work".
I find places with scan as you shop apps way better than waiting in a checkout line. I get a running total of what I have in the cart, and when I'm done, I just leave without a line.
I saw on TikTok (yeah, I know) where a lawyer says they never use it because you can be liable for stealing anything you forgot to scan. I never use it because occasionally the cashier will let you know that there is a coupon for something you are purchasing.
At least 9 times out of 10, the self-checkout is a better experience for me. I can scan things at my own pace, pack them in my bags the way I like, and listen to my D&D podcast rather than having to interact with a stranger, unless the system acts up.
As pxl97 said, I preferred being able to scan things as I shopped and not have to stop at the checkout except to pay, but the one place around here that had that discontinued it a few months ago.
We need to push back hard against the idea that automation is "stealing" people's jobs; the promise of automation has always been that of more money for less work. We need to remind everyone of that, and reverse the trend of automation simply putting more and more money in the pockets of the already-wealthy at everyone else's expense.
I do want a robot to take their job. I also want us to ensure that everyone does have a fulfilling, non-robotic job available to them. I think those cashiers would be happier, say, painting a mural on the wall of the grocery store.
Of course, this gets into UBI territory and making sure that as a society we are not just replacing menial jobs to then abandon the people who would have had those jobs.
If the content of your email could have been generated using an AI trained on publicly available information, it necessarily must not include anything private.
Which means you are spam messaging your new hires with messages that contain zero actual information!
Noise is a word that does not even properly capture the uselessness of such communications. I swear, many humans seem to just want filler text to put places, and they just keep shoving it there.
Anyone read Blindsight by Peter Watts? Spoiler alert: an entire science fiction book written around the fact that sending useless information to hyperintelligent beings could be interpreted as an intentional distraction and therefore a threat.
Consider the relationship between a writer and an editor in a newspaper setting. The writer creates the content, while the editor reviews and refines it to ensure it aligns with the publication's standards and style. The editor's input can help improve the overall quality of the piece without taking away the writer's original ideas and skills.
In a similar manner, using AI tools like GPT-3 or GPT-4 can be likened to having an AI editor that assists you in the writing process. By working together with the AI, you can maintain your writing skills while also benefiting from the AI's ability to generate content quickly and suggest new ideas.
Just as a writer doesn't lose their skill by working with a human editor, collaborating with an AI editor can enhance your writing without diminishing your abilities. The key is to find the right balance between using AI for assistance and retaining your personal writing skills, just as a writer and editor strike a balance in their collaboration at a newspaper.
This comment was written in collaboration with ChatGPT.
That example is so bad, I'm not entirely sure some writer having a bad day didn't put it in as a cynical joke.
Or are they working in an environment so steeped in insincerity that they don't even know what being genuine is, or that one of the most important things for a manager to do with things like a "welcome" is setting the tone?
Nothing says welcome, and bring your genuine self, and trust colleagues, and feel intellectual safety, and invest in being part of the team that's investing in you... like a new hire's manager who can't even be bothered to say welcome themselves, and offloads it to a chatbot button instead.
OTOH, this fits the sociopathic culture you'd expect, when you look at their hazing rituals, compensation levels, and promotion criteria. It just doesn't fit the culture some of us techies envisioned when that company was started.
Well, since you never have to go into the office, you and the HR person can just hang out on your couches and let the AI assistants talk to each other. It's like the Jetsons, except hopeless and stoned.
Are you a manager whose new hire just sent a reply to your AI generated welcome email? Don’t have time to read it? Workspace auto-hides the text and just shows you a simple three-word summary.
For just $3.99 per month, we can also reply to all the subsequent emails from your manager within 30 minutes! Just give us access to your private inbox and the emails will include your own personal style.
This is hilarious. The point of a task like a welcome email is that it _takes effort_.
I can’t help but think that an example like this shows how much Google has become like Microsoft in the 2000s. A bunch of rich, middle-aged managers that have spent too long in the alphabet soup.
At the end of the day, you're still responsible for reading the email before you send it. I don't think it matters how the email was written. I think the final content matters more. There's many ways to prompt an AI, both carelessly and with extreme attention to detail.
For example, I typed this message. But I'm sure with careful massaging, I could get an AI to write the same thing, word-for-word. The laziness you're referring to probably happens even if the manager is typing the email out themselves.
What was the sci-fi book with "silvers" that were basically AIs that would handle administrata, set appointments and negotiate with each other for their humans?
Not exactly. These are your feelings because you know of A.I. and would rather not receive a canned message BUT not all people are the same. There are many who would appreciate that email and welcome it. Ignorance is not bliss.
You don't need to imagine. It's enough to have a LinkedIn to receive unprocessed templates like this where I would guess their scraper failed or something. On the contrary I don't think an AI would make the same mistake as easily.
Onboarding email should be just a template. Or, even better, a small template with a link to bigger template. As a manager, you should demonstrate your empathy in a welcome 1:1 meeting.
In an ideal world, it would be lovely if managers wrote copperplate welcome notes on the finest perfumed paper, but they don't. They copy and paste a template or use a tool like Text Expander. This just makes the process slightly more convenient.
We are going to hear a lot more of this kind of behavior from super talented and experienced people.
But for the less experienced, AI is a starting point. And depending on the kind of person you are, the kind of upbringing you have, and the kind of education you got, you will be motivated to edit and improve what the AI gives you.
The other thing I think this will help with is those idiots and scumbags who give commands that are not very well thought out and then yell at talented people for not following instructions because they filled in the blanks with their own ideas. Giving it to an AI will show how ridiculous their commands were. (sorry, end rant)
Well, that Google post probably was written with the help of AI, and it looks like it could itself benefit from a key-points extraction service; it's full of corporate mumbo jumbo with occasional "Select developers can access", "stay tuned for our waitlist soon", "to testers in the coming weeks.", "more to come in the weeks and months ahead."
In general, it's very low-energy and conservative compared to bold MS moves.
It's more than that. The initial blurbs don't contain enough information, but with context you can make solid guesses about what should go in.
So it's
* small explanation
* initial proposal of what you probably want
* fixes and approval
Not really any different than asking a person to do it for you.
You don't tell your car/mapping software the exact route you want, you tell it where you want to go and then review the proposed plan, and that's with a very restricted setup (exact destination, limited routes).
Edit
More importantly, short messages that convey everything you want with the right tone are hard. Taking my mashed thoughts and converting them to easier to understand prose is a benefit even if it's longer.
Just like a larger function can be better than a code-golfed one. It's longer but it doesn't take longer to read.
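To illustrate that analogy (a made-up example, not from the thread): both functions below count vowels in a string, but the longer one reads faster.

```python
# Code-golfed: short, but you have to decode it before you trust it.
def v(s): return sum(map(s.lower().count, "aeiou"))

# Longer, but each step is obvious at a glance.
def count_vowels(text: str) -> int:
    vowels = "aeiou"
    total = 0
    for character in text.lower():
        if character in vowels:
            total += 1
    return total
```

Both return 3 for "Hello World"; the second just doesn't make the reader unpack `map` over per-character counts to see why.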
It reminded me of an old article [1] about Howard Schultz where his reply of "On it." was praised by another CEO because it demonstrated that he was much more focused on getting shit done rather than overcommunicating or over-delegating.
I just dug it up:
> Friends and colleagues agree that he is as fanatical as ever about Starbucks. Millard Drexler, the chief executive of J. Crew, recently e-mailed Mr. Schultz to complain that the coffee lids at a Starbucks on Astor Place in Manhattan kept spilling coffee on his shirt. Mr. Schultz’s reply: “On it.”
> Mr. Drexler, who has a habit of e-mailing C.E.O.’s with complaints, says: “I can give you many more examples when they say, ‘I’ll send this to a research department or a gatekeeper.’ ” But, he says of Starbucks, “to have that kind of quality control they have around the world is pretty extraordinary.”
Google is also effectively giving up on Search if they are willing to help people generate more SEO spam at scale by directly providing AI text generation in Docs. OpenAI/Microsoft shook them up so hard that they are doing stuff that is most likely self-destructive long term.
AI-generated SEO spam is here regardless of what Google does. But they don't also want everyone to stop using Google Docs because it lacks features that are definitely coming to Microsoft Word and every other competitor.
Malls are already designed in a labyrinthine way so you pass by as many stores as possible.
Self driving cars might have set destinations that are cheaper to visit, or have free rides that are ad-supported, or recommend stops along your route.
And how were those cars built? Electric cars generate more tons of CO2 to make than conventional combustion ones, but that's expected to pay off as long as the electricity is generated cleanly.
I guess that depends on whether it's _your_ self-driving car or Google/Uber/Tesla's self-driving car that they're letting you use. It might even be so that the physical car is yours, but the self-driving software is licensed and controlled by a third party.
I wonder if they will let the AI on the receiving end access the original text that the sender wrote to get better results. Presumably Google will store the pre-BS message.
The video is very focused on first order effects. A middle manager automates all their work by writing a few sentences; they produce a "brief" by giving a one sentence input to the AI, doubtless imagining that other humans will carefully read their brief, quietly nodding to themselves and considering every sentence, with passing thoughts about how competent the middle manager is. In truth, the middle manager will soon be replaced by an AI, and the lovely looking briefs produced by the AI will only be consumed by other AIs.
Another part shows the AI creating personalized marketing messages. For a brief time they will work, and then every company will be producing them by the millions, and then customers will stop reading emails at all -- again, the AIs will be writing messages consumed by other AIs.
We're pretty close to outsourcing the customers' decision-making to AI already. Of course, when the customers all lose their jobs, I'm not sure who's going to be buying anything...
My view of the future has completely collapsed over the last 6 months to a year. I desperately don't want to live in a world where everything about everyone is fake and augmented. Where everything you look at and interact with is just AI generated or AI controlled.
I can't see how AI isn't going to make the problem of falling social interaction 1000x worse.
You can still walk down to the creamery and drink a milkshake. There are plenty of flesh and blood people there to talk to. The birds are chirping and it's a sunny day.
There are some bad/weird stuff happening on parts of social media, but you can literally just turn off your phone and go outside.
Indeed, even with content... Somebody with ChatGPT and Grammarly can produce a humongous amount of content these days, and it will read really well to the average Joe.
I'd like to be not so sure. I could see a mass-response of revulsion, and a cultural move towards concise, direct, non-bullshit communication.
I don't want to think too hard about the alternative, of large groups of people not even bothering to ever try to learn to write. Probably become better megacorp output consumers.
This video probably had functionality representing 10% of the latest YC batch.
Not mocking. This is a really hard space. You can start a whole company and build a product and overnight goog/msft/openai can tweak an existing product or API to make you obsolete. I'm sure some folks will figure out moats eventually... Always rooting for the upstarts over the incumbents.
Yeah turns out launching a UI wrapper around a clever prompt doesn't give you much leverage against the big players.
Seems like the opportunity is applying LLMs in the background to accomplish previously difficult tasks, as opposed to just building text generator tools.
Yeah but let's wait and see if it actually ends up being any good. Also, you could use a similar argument to say that Feedly has no reason to exist while Google Reader was better, but we all know how that ended.
I really hope the big take away from such developments is that if an AI can rattle out fancy-sounding reports and slide decks in seconds then there was never any value in such content to begin with. It was and still is corporate busy work. Maybe instead of using a text model to convert 5 bullet points into a 3-page report, we should just share the bullet points? But hey they also sell you an AI that summarizes this report back into bullet points so you don't have to read it.
As someone who spends some amount of time at work doing stuff like converting 5 bullet points to a 3 page powerpoint presentation, I think everyone already knows it's corporate busy work. The point of it is not that anyone is going to read that 3 page report, it's to show that you've done work and produced some output ("demonstrating toil"). As long as they're insisting that it get done and are willing to pay for it to get done, I'm more than happy to ship that toil off to an AI. If it means I waste 2 minutes writing a good prompt rather than waste 30 minutes copying and pasting content, adding filler narrative, and picking the right font and color, that's more time back to do other things besides this performance art an employer expects of me.
Genuinely curious: what makes you think your employer will let you ship that toil off to an AI instead of just letting you go (or downsizing your team) and shipping off the toil themselves?
The acceptance that you've been doing corporate busywork as your career seems at odds with your confidence that you'll be the one to outsource that busywork to an AI. Doesn't the rapidly increasing capability of automated systems to pull off this busywork imply that careers built on doing that busywork are increasingly at risk? Am I missing something?
It could always happen, of course, especially now that we're in a bear market.
Most of us techies, at some point in our career, have managed to automate some boring or tedious part of our job. We don't tend to get let go for doing this, it just frees us up to do other, possibly higher-value things. Or, if you are less lucky, your employer just comes up with other not-yet-automated tedious things for you to do to demonstrate toil.
Our days are all filled with a combination of useless, low-value tasks and useful, high-value tasks. I'm personally a believer of finding the low-value tasks and automating them, so that you can spend more of your time on the high-value ones. I've worked with people who have the opposite view: Spend your time visibly toiling on whatever is easiest, because most employers can't tell the difference between spinning your wheels being busy and providing actual value.
So humans are going to enter bullet points, and the AI will generate prose. Then the human on the other end will use an AI to extract bullet points from the prose.
I hope we realize how insanely stupid this is soon.
Can't wait for Google to add ChatGPT-like features to Chrome and their suite of Web Apps so I can finally ditch Microsoft Edge with its ChatGPT integration. The UI of Edge is god awful. It has the same issue as Explorer in Windows where Microsoft thinks showing you all UI buttons and other UI elements at once is a great idea. Microsoft can't make compelling consumer-facing products to save their life.
I actually liked Edge and was happy to move away from Chrome. Then they started injecting their content into web pages. And then some shopping feature. I hate seeing the "You can now add extensions from the Chrome..." banner when I'm logged into the Chrome web store. It won't go away even after I've closed it multiple times.
It sounds like it's Bing Chat, not ChatGPT. They are fairly different – ChatGPT being fairly mature but heavyweight, and Bing Chat being truly unhinged.
Have you actually used Bing Chat? While there are certainly some very funny conversations floating around where it goes off the rails, that isn't the normal behavior: It's pretty good in practice.
They show an example of someone typing "I'm on it!" and then converting that into a page long email. They also show the ability to summarize, which of course you'd use on a page long email whose real content is nothing but "I'm on it."
That verbose email is going to end up a weird machine interchange format (which only looks like formal writing for human consumption), just to send a message like "I'm on it."
> Starting today, we’re making an efficient model available in terms of size and capabilities,
Excellent!
*goes through the terrible UI of the Google Cloud console*
Nothing there. I see it's not actually today; it's sometime soon, with some people in a private preview. Any info on what model size they're actually releasing?
Time to try the demo they showed in google docs.
> Now, we’re excited to take the next step and bring a limited set of trusted testers
Oh.
It's good that they're actually planning to ship stuff, but when? I've seen private previews sit around certainly for 6+ months, probably a lot longer.
I'm looking forward to seeing some of this make its way to general release, but I'd guess that this post is in the vein of their hurried initial press response to Chat GPT.
It's quite something. I wish they'd just be more up front about it because then I'd probably be more excited. These changes are incredible, and it's a shame my general feeling is disappointment from trying and failing to use something.
They do this all the time: launching something in the US but not mentioning the limits, leaving me struggling to get something to work.
It doesn't help that their approach to AI is often "don't worry, a magic thing will know when you want it and will make it appear", and I don't know if I'm not making the right incantations, or it's broken, or it's not available.
I wish these would all come with a "notify me when available" button at the very top.
They're going to open up PaLM & an image generation model (imagen?) through Google Cloud. It looks incredibly easy to finetune PaLM in their environment:
When you have nothing at stake, like OpenAI or Bing, it's easy to just throw things at the wall and see what sticks. There is nothing else on the wall you could potentially knock off.
I don't see why Google is in a materially different situation with AI than Microsoft is. It's not like anyone is suggesting that Google should have shut down their search in order to launch something like this, if it fails then it joins the long line of dead products.
> Earlier today, we announced the PaLM API, a new developer offering that makes it easy and safe to experiment with Google’s large language models. Alongside the API, we’re releasing MakerSuite, a tool that lets developers start prototyping quickly and easily. We’ll be making these tools available to select developers through a Private Preview, and stay tuned for our waitlist soon.
The mail part is ridiculous, reminiscent of the latest episode of South Park, which illustrates the impact of teenagers using ChatGPT to respond to their girlfriends' messages.
Nature takes the path of least resistance, and nowhere is this more obvious than in human interactions. If your signal consists of three words, write three words instead of generating blobs of decorative noise masquerading as content, serving no purpose but to be decoded by the same AI, just client-side.
What I find regrettable is how GCP's startup program seems to be entirely geared towards VC-fuelled startups for the higher tiers.
Compare with Microsoft for Startups, which isn't seen as hip and edgy but can get you like $5k+$25k in Azure credits (and some OpenAI credits) without having to go the VC route.
Seems more entrepreneurial to me. Venture-backed is just not the right path for everyone.
It’s worth comparing these to how a human would respond to the same request. “I’d like to write a job description” would get “okay, what will they be doing?” or something. Instead we just get a best attempt at making shit up.
It's interesting that the example for using an AI to write documents is a job description. It is uniquely suited to the AI because in real life, we've been copy-pasta-ing them for decades. It's also a place with some level of legal peril: a job description with material inaccuracies or discriminatory language invites lawsuits.
They just keep putting out press releases talking about private betas and tests with "selected trusted partners" but nothing the regular consumer can play with.
If that's the pace at which they're planning to compete with Microsoft/OpenAI, they've already lost.
Such an uninspired use of the term. Google is so used to spreading SEO spam; now they want to choke every office with well-written noise. This is using AI to make me waste more time, and I don't want that.
Instead I want:
- Intuitive interfaces to respond to the email with a few clicks and a short laconic email (I don't want to waste other people's time)
- Specialized text generators for legal purposes, which create provably verifiable and consistent documents
Prompting an AI for 5 seconds to write a bunch of paragraphs that your human counterpart has to then read and think through is incredibly disrespectful of their time.
AI assistants that summarize and enrich information for you, fine. That actually saves time. But this feeding of generative output to humans is way off the mark. It feels like the product people are not sitting down to think what they're doing and just riding the GPT hype wave.
But the recipient doesn't have to read and think through it, they can just have the same AI summarise the email. Google could even store the original prompt and use it as context for the summariser.
I cannot stand the prosaic, repetitive, boringly formal nature of GPT-style output. I am also tired of the randomly messy-surreal, illogical images that DALL-E-style models generate.
HR/PR approved corporate communications are already crappy. From a practical viewpoint, I don’t want to wade through pages of bullshit boilerplate and illogical images. The future is bleak and boring…
“So if you’re a manager onboarding a new employee, Workspace saves you the time and effort involved in writing that first welcome email.”
Some great announcements, though they could have given more pertinent examples. You save your time elsewhere using AI tools, so you can spend it with your valuable newcomer.
Yeah, cool, but I need to write form letters (mail merge) using my own fonts. Can't do that with Google Workspace, and I'm an enterprise customer. I have no idea who leads product at Google, but they need to get their shit together. I don't need some ChatGPT BS to write my emails for me.
It is either intentional or awfully coincidental that Google would announce this two days before Microsoft's AI event: https://news.microsoft.com/reinventing-productivity/, which will probably cover similar product features in Office 365.
Speaking of copyright, remember that content produced purely by an algorithm cannot be copyrighted. We've already seen this play out with Stable Diffusion. Also, the US Copyright Office is surely unprepared for the coming tsunami.
This is a very grey area. A prompt, for example, could be copyrightable. An image you create by drawing a base layer and having the AI fill in the details could be copyrightable as well.
The government copyright office doesn't really decide these things. Courts do.
I am pretty sure it exists, but not for you. "Select developers can access the PaLM API ... in Private Preview today, and stay tuned for our waitlist soon".
I swear, if Google says "select developers" or "trusted testers" one more time, they'll have lost all credibility in launching something so important to their business and it will totally be on them.
It’s notable that at multiple points they sell this service as safe
- “easy and safe”
- “all supported by robust safety tools”
when the only things users really care about are quality, latency, and cost. Not sure where their head is at, but unless they orient around these three things, they're never going to catch up.
> Not sure where their head is at but unless they are oriented around these three things they’re never going to catch up.
But ChatGPT is doing exactly the same. More energy is spent manipulating the output to be politically correct than on actually making the AI more powerful.
That seems reasonable - if you're trying to create a market for a publicly-available chat-AI, and making a demo freely available for PR/advertising purposes, then the public perception of that demo is critical to your business.
If they had just put up the same model without that attempt to make GPT wear a politically correct mask, they'd just get another Microsoft Tay event and a buttload of negative publicity for their investment.
For an internal application used for specific practical tasks, sure, focus on making it more powerful; but if you want a demo for general public as a 'pre-sales advertisement'? You bet that political correctness has to be there.
If ChatGPT is useful, people are going to use it anyway, even if there is negative publicity because when you bait it with "Is X race superior to Y race?" it gives you a non-PC answer.
ChatGPT isn't the only game in town, and corporate purchasing departments (which are the actual target audience) will then buy a similar service from Google or someone else, even if it's slightly worse, rather than have to explain (and risk their personal position) why they're using the toxic, taboo thing that's reviled by the press.
I get what you're saying here and agree, but there is no world in which OpenAI exists as a going concern without making their product overly PC. Remember that their model was trained on a broad swath of the Internet, warts and all. They can either spend the effort making it aligned with the current thing up front, or they can rush to do it after the model outputs something spicy and they have to deal with the resulting PR shitstorm, outrage-baiting hit pieces, and other distractions.