I am looking forward to the event in Tokyo (https://hellotokyo.splashthat.com/) related to this news.
Rumor has it that it may open in Japan.
I sometimes use coworking spaces like this for company events, but choosing a venue is always a struggle.
Criteria for selecting a venue
Capacity: approx. 100 people
Atmosphere: a space that feels different from the office, where engineering events (presentations, code battles, etc.) can be held
WeWork venues look appealing in terms of atmosphere and facilities; has anyone who has used WeWork held an event that meets the criteria above?
I use the Hooked model for the UX improvements I'm working on, since an improvement can't really be called a success unless users keep coming back.
For measurement, we define user actions (account creation, music preview, etc.) as tasks and track "Time on Task", "Error Rate", and "Task Success Rate" for each.
For tooling, we use Google Analytics plus Tableau as a BI tool.
This approach seems best to me for now, but are there other good methods?
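As a sketch of how the three task metrics above could be computed, here is a minimal Python example over a hypothetical event log (the schema, field names, and sample values are all assumptions for illustration, not the poster's actual pipeline):

```python
from datetime import datetime

# Hypothetical event log: (user_id, task, started_at, finished_at, error_count, succeeded)
events = [
    ("u1", "create_account", "2017-07-01T10:00:00", "2017-07-01T10:02:30", 0, True),
    ("u2", "create_account", "2017-07-01T11:00:00", "2017-07-01T11:05:00", 2, False),
    ("u3", "preview_music",  "2017-07-01T12:00:00", "2017-07-01T12:00:45", 0, True),
]

def task_metrics(events, task):
    """Compute mean time on task, errors per attempt, and success rate for one task."""
    rows = [e for e in events if e[1] == task]
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds()
        for _, _, start, end, _, _ in rows
    ]
    return {
        "mean_time_on_task_s": sum(durations) / len(rows),
        "errors_per_attempt": sum(e[4] for e in rows) / len(rows),
        "task_success_rate": sum(1 for e in rows if e[5]) / len(rows),
    }

print(task_metrics(events, "create_account"))
# → {'mean_time_on_task_s': 225.0, 'errors_per_attempt': 1.0, 'task_success_rate': 0.5}
```

In practice the rows would come from Google Analytics event exports rather than a hard-coded list, but the per-task aggregation would look the same.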
Writing out the ideas you're turning over in your head helps you organize your thinking and get a bird's-eye view of it.
I think a Value Stream Map is a good tool for that kind of overview, but how does everyone else get their ideas out?
I was saying the other day that it would be convenient if AI could automatically extract short clips from pieces of music.
In Japan, we use clipped music as ringtones for mobile phones and smartphones; do people use short music clips like this in other countries, such as the United States?
Push notifications and email may be effective early on, but is it really necessary to ask users to opt into notifications at first launch, before they even understand the app?
If you prompt for notifications after users have experienced the app, I think the probability of retaining them is higher.
How much need is there for speech recognition devices in the United States? At least in Japan and China, speech recognition hasn't reached a practical level and demand is small.
There are two major use cases as I see them, and both are buoyed by the fact that these days, speech recognition by devices such as the Echo or Google Home is actually quite impressively good -- it can pretty consistently understand your words from across the room, while it's playing music, for example.
The first case is the "want" factor that others have mentioned on this thread. It's less that I "need" an Echo in my home, and more that it makes certain activities easier: I can get measurement conversions, set timers, and pause my movie while my hands are occupied cooking in the kitchen; I can get the weather forecast while putting on my shoes on the way out; and now, I can also answer calls and send/receive messages while my hands are occupied. It's less that I couldn't do that with my phone or computer and more that Echo-like devices make this more convenient.
The second case involves the fact that there is a comparatively small, but still significant portion of the population that cannot effectively use touch devices to do the things that the rest of us can easily do with our phones. The people who "need" this kind of device the most, in my opinion, are those suffering from paralysis causing them to be unable to use their hands or fingers with enough dexterity to operate a touch screen, or unable to use them at all. These devices are real life-changers for this group, since they can now control lights, their television and other entertainment options, and talk to their loved ones -- all activities that were difficult, expensive (requiring specialized devices), or impossible before.
In addition to this first group, there are also those who often have difficulty figuring out how to use computer/touch screen technology, even though they are physically able to -- the elderly are probably the first example that comes to mind. It's much, much easier to just set up an Echo Show and be able to "drop in" on your grandmother and chat, than it is to get her set up with an iPhone or a computer and teach her how to Skype or text.
So all in all, most people will get this for the "cool" factor and because it makes their lives a little easier, but some people will gain a huge benefit from it.
There isn't a need; it's more of a want. I have a few Echos and use them to control my whole home audio system.
"Alexa, play Kendrick Lamar on Spotify."
Just like that, I have music throughout the house and in the front and back yards. You can also get more specific with commands in order to limit where the music plays, what music service is polled, etc.
It is much easier than pulling out a device, navigating to the application, typing in a search string, selecting the artist, and clicking shuffle playback for all of the artist's songs.
I don't know what you have in mind, but I was pretty impressed that when I asked Alexa to play 'Scherzo for X-Wings' she pulled up the right track from the Force Awakens soundtrack without a problem. Try it before you knock it.
Downvoted but a very valid observation. I listen to a lot of Black Metal and as most of it isn't in English, it's hard for Siri to understand what I'm asking for, or to even know if I'm pronouncing it correctly.
The problem is not the availability of the music but Alexa's ability to understand names. If I tell an Echo to play, say, Zuntata, is it going to work? If I tell it Japanese band names while the overall language I speak is English or French, what will happen? Most voice assistants choke on foreign-sounding names and can only understand them if you set the device's language to match.
A voice assistant is only as useful as its ability to understand speech. If it can't tell anything from half of my music library then it's not worth much to me.
I literally asked my Google home this morning: "Hey Google, play some Peruvian psychedelic cumbia" and it found exactly what I wanted (using Spotify). For me, these home assistants are the best thing I have bought since my first smartphone.
Doesn't that violate streamed discovery and all that? You should just be thumbs up thumbs down and trust the streaming service to take your advice as it DJs to you. "Alexa play what you think I need to hear"
Sick burn. In your situation, you can pre-program playlists with your funky foreign beats and command Alexa to play the playlist instead. It adds a step, but it's painless after that.
> You can also get more specific with commands in order to limit where the music plays, what music service is polled, etc.
How do you have this set up? Does it integrate with some other home audio control system you have, or do you have an Echo plugged into each space and the music plays directly off them? Is there a way to play your own mp3 collection instead of streaming services?
It integrates with my Crestron control system, which controls the entire house. Alexa also integrates nicely with Sonos and Heos wireless speaker systems, letting you dictate music and location(s).
I didn't think I'd be a big user, nor did my wife. Since getting a Pixel phone, I find I use it quite a bit to conduct complex search queries as quickly as possible.
So much faster to speak it out now that their recognition is so damn accurate.
My wife fought me on it until I noticed she too was using Siri more, although the UX with Siri is not as great as Google's Assistant IMHO.