What about the pleasure of learning something new? Of accomplishing something for yourself? Solving a problem, or making a new thing? Or making yourself a better friend, partner, or spouse? Enjoying time with people you care about? Those things can give you a feeling of purpose and meaning.
I think the problem comes from expecting to find a larger purpose or meaning. Few of us will change the world significantly. Rid yourself of the idea that your life must have a greater purpose than satisfying yourself and the people you care about.
For example, too many people put a lot of effort into social media status in the form of "likes" or "karma." What actual meaning does that have? What purpose does it serve, aside from attention-seeking and narcissism? When people spend their time on getting attention and status display, their ego gets crushed when that attention doesn't come, or goes away. Wouldn't enjoying a walk on the beach give more pleasure, even if it serves no purpose beyond relaxation and contemplation?
You have to start with yourself, to find meaning and purpose for yourself in how you treat yourself, and then how you interact with other people. You can't start by hoping to find a greater purpose or meaning in life and feeling down and useless until that happens.
Hospitals are not storing the data on a hard drive in their basement, so clearly this is a solvable problem. Here's a list of AWS services which can be used to store HIPAA data:
The biglaw firms I’m familiar with still store matter data exclusively on-prem. There’s a significant chunk of floor space in my office tower dedicated to running a law firm server farm for a satellite office.
Or legal order. If you're on-prem or in the cloud and in the US, then it might not matter since they can get your data either way, but if you're in another country, uploading data across borders can be a problem.
Some of the anecdotes seem completely fabricated to provide a convenient example (e.g., the guy's wife was very observantly watching him play a round of golf with a colleague and caught him cheating?)
It's not unreasonable to think that the wife was playing with them, was waiting for the cheater (fictional or not) to swing, and happened to notice them move the ball.
I’ve spent time in New Haven and NYC and… not even close. Sure, there are good slices in New Haven, but there are also pizza deserts, and a lot of them. New Haven does compete with NYC on restaurant-style pizza though.
Yeah, this is how this conversation typically goes. "But we've got good pizza from X, Y, Z restaurant" is not the same as just walking to the corner and paying $2 for a big cheese slice. The NY slice is essentially a street food, like street tacos are in some states along the Mexican border.
I really don’t understand why we as a society have found it acceptable to experiment like this on other sentient beings for our own advancement.
Why are these animal lives worth less than ours? Just because we are more “advanced”?
Killing animals for food is at least in line with the natural order (leaving aside factory farming). Running experiments like this is far beyond natural, and to me is quite bizarre and unethical.
A society that creates people who feel entitled to say they owe nothing to those who made immense sacrifices for them (even though nobody chooses their relatives) is quite sad.
It's a society so individualistic that it deprives one of the main pleasures of human life: making a difference in the lives of those you love.
Parents choose to have children; in my mind the responsibility flows in that direction, from parent to child. A child has no choice in the matter, so why should they be saddled with responsibility?
I'll happily care for my mother as she ages - she's been a wonderful parent all my life. But I have a friend whose parents basically just kept them alive to adulthood, kicked them out, and are now expecting years of "payback". No thanks.
True video capability would entail describing a scene as a prompt and getting a video in return. Not interpolating between a handful of images as is being done now (not to discredit those).
This will be a huge game changer when it occurs. Whether it be for deep fake videos, creating custom content, or making a new season of your favorite tv show that was cancelled too early. The possibilities are endless.
This is probably not in the near future (i.e. this year), but I doubt it is very far off.
I am much more interested in an intermediate step. I would love to be able to use a tool like this to create a comic book. This is, after all, just static artwork, which the tool already creates quite beautifully.
What it would need to be able to do to get from here to there is understand some concepts, the first being "characters". On Reddit there was a beautiful image that recently won first place in an art contest, and it's frustrated quite a bit of the art community. When I was looking at it I thought it was awesome, but wondered about the ability to create another hundred or so images in that same 'world' the image was showing. I would want to do something like give it the prompt "tired old medieval knight with a mace and shield", have it create the character, then be able to name him "Tom" or something and feed it more prompts for that character, like "Tom is sitting in a forest brooding", and have it create the exact same character but in a different context.
That would be pretty game changing for opening up amateur web comics to a large body of people who have ideas and stories to tell but no art skills to speak of - my stick figures are crooked :(
The result is bad though, for the same reason you can't generate video with it. Comic panels need to relate to each other; you can't simply make them out of random images. There aren't sufficient style controls to do that with current technology, even if Midjourney added in "textual inversion".
There is some work exploring that with Textual Inversion[1].
Another trick for approaching this problem is specifying the random seed, which makes the same prompt generate the same image with no randomness. When you then change the prompt, you get an image that is very similar to the first one, but with the variation included. Somebody used that to age a woman across 100 years[2], with quite stunning results. It even works with gender or style changes.
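The reason the seed trick works is that, with the seed fixed, the initial noise is identical between runs, so the only thing that varies is the prompt. A minimal toy sketch of that idea (everything here is made up for illustration: `toy_generate` and its SHA-256 "prompt embedding" are stand-ins, not real Stable Diffusion code, where you'd instead pass a seeded generator to the actual sampler):

```python
import hashlib
import numpy as np

def toy_generate(prompt: str, seed: int) -> np.ndarray:
    """Toy stand-in for a diffusion sampler: the output depends only on
    the seeded initial noise plus a deterministic 'embedding' of the prompt."""
    # Fixed seed -> identical "initial latent" noise on every run.
    noise = np.random.default_rng(seed).standard_normal(8)
    # Deterministic per-prompt term (sha256 stands in for a text encoder).
    digest = hashlib.sha256(prompt.encode()).digest()
    prompt_term = np.frombuffer(digest[:8], dtype=np.uint8).astype(float) / 255
    return noise + prompt_term

a = toy_generate("a young woman, portrait", seed=42)
b = toy_generate("an old woman, portrait", seed=42)
c = toy_generate("a young woman, portrait", seed=42)

assert np.array_equal(a, c)       # same seed + same prompt -> identical output
assert not np.array_equal(a, b)   # same seed + edited prompt -> related variation
```

The key property is visible in the assertions: re-running with the same seed and prompt is fully deterministic, while editing the prompt changes only the prompt-dependent term on top of the shared noise, which is why the "aging across 100 years" series keeps the same composition.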
I recently saw a Twitter thread from last year where someone made a comic book with AI generated backgrounds. The characters were added in later, but it stuck with me as a very cool future use case
Imagine how fun it would be, sitting at a terminal in vim editing a 100-line 'script' for a short movie and getting rapid feedback. I'm so excited about the future.
The possibilities are endless. "Insert Willy Wonka as Frodo's love interest; Willy should join the major battles with Uzi machine guns, and his dialogue should be as if he were an inner-city gang member."
I thought we'd never get image generation this fast. Last year it was 30 minutes per image. The Stable Diffusion folks are planning a 100 MB release of the image generator in Q1, which would surely be real time. I actually suspect you can get something like that incredibly fast (even though all intuition says otherwise).
What is being referred to/defined as "interpolation" here? Because, as an outsider... isn't Stable Diffusion "interpolating" text into images/frames/video in a literal (maybe not technical) sense?
It's to be interpreted in the quasi-mathematical sense where you have images for frame A and frame B representing your data points. To interpolate between those frames, a flow of plausible images simulating the transition from A to B is generated.
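That quasi-mathematical sense can be sketched concretely: in practice the interpolation happens between the latent vectors behind frames A and B, with each intermediate latent decoded into a plausible in-between image. A minimal sketch using random vectors in place of real diffusion latents (the `slerp` helper and the 64-dim latents are illustrative assumptions, not any particular model's API):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation, commonly preferred over plain linear
    interpolation for Gaussian latents because it keeps intermediate points
    at a plausible norm rather than cutting through the origin."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):                    # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

rng = np.random.default_rng(0)
latent_a = rng.standard_normal(64)   # latent behind "frame A"
latent_b = rng.standard_normal(64)   # latent behind "frame B"

# Five steps from A to B; each latent would be decoded into one video frame.
frames = [slerp(latent_a, latent_b, t) for t in np.linspace(0.0, 1.0, 5)]

assert np.allclose(frames[0], latent_a)    # t=0 reproduces frame A exactly
assert np.allclose(frames[-1], latent_b)   # t=1 reproduces frame B exactly
```

Decoding each point along this path is what produces the "flow of plausible images" between the two endpoints; it's also why the result is one smooth transition rather than anything with cuts or scene changes.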
Interpolation here meaning one smooth motion transition is all that is depicted. An entire episode of television requires things like cuts between scenes, possibly discontinuities like flashbacks, scenes that take place days, months, or even decades later, and characters should still look the same, but might be wearing different clothing, or grow a beard, or get really old but still have similar facial features and the same skin color. If one ages, they should all age about the same, unless it's a story with time travel or humanoid immortal characters that don't age.
I'm sure these types of capabilities will come at some point, but no current model can do it. It requires more than just projecting motion into a scene.
You could "hack it" by using a couple of other models as part of your pipeline, similarly to how you sometimes have to run a GAN after SD to "fix" faces.
You could also put a language model on top of your prompting system, so "Gandalf kicking ass" gets translated into "Page XXX, Paragraph XX from LOTR".