Kinect for Windows SDK (Silverlight Required) (microsoft.com)
108 points by th0ma5 on June 16, 2011 | 63 comments



This is unreal-- from the FAQ:

Q: I know that other drivers and development software for Kinect are available on the Web. Can I use the Kinect sensor device with these other drivers or software instead of the SDK Beta?

A: No. Use of the Kinect sensor device is subject to its own warranty and software license agreement that allow you to use it solely in connection with an Xbox 360 or Xbox 360 S console. Only Microsoft can grant you the additional rights that you need to use the Kinect sensor device with a personal computer. Microsoft grants these additional rights in the SDK Beta license, but only for uses of the Kinect sensor device in connection with the SDK Beta. If you use the Kinect sensor with a platform other than Xbox 360, Xbox 360 S, or Windows (with the SDK Beta), you void the warranty you received when you purchased the Kinect sensor device.


Can't you just be happy for a minute that they even released a public SDK?


The terms on the beta are pretty restrictive. It's better than nothing, but basically it's non-profit, educational stuff only. A little discouraging to start working with, especially when I couldn't find any concrete info about a future commercial SDK.


It's a beta for something they never even meant to make a pre-alpha of. They probably just don't want to be held responsible when a commercial user throws a fit over lack of support for bugs he found in beta software.


I got mine second-hand. Somehow, the warranty and all other license materials weren't included.

I can has open source now?


From the demos shown on Channel 9, it seems the SDK comes with skeletal tracking. This would be big!!

Edit: I just looked at the Skeletal viewer sample. Looks like they do have skeleton tracking in the SDK. Woot!

Edit 2: They also have sound localization from the Kinect's microphones!!

I'm not sure about others, but I did not expect either feature to be in the initial SDK. This might explain why it took them so long (they said it would be released in Spring). They seem to have gone all-out!
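For anyone curious what the managed API actually looks like, here is roughly what the SkeletalViewer sample boils down to. This is a rough sketch from memory of the beta docs, so treat the exact names (Runtime, RuntimeOptions, JointID, KinectAudioSource and its sound-source properties) as approximate:

    using System;
    using Microsoft.Research.Kinect.Nui;    // beta namespace for skeletal/depth/color
    using Microsoft.Research.Kinect.Audio;  // mic array / source localization (names approximate)

    class SkeletonDemo
    {
        static void Main()
        {
            // Initialize the sensor with skeletal tracking enabled
            var nui = new Runtime();
            nui.Initialize(RuntimeOptions.UseSkeletalTracking);

            // Fires ~30 times a second with up to two fully tracked skeletons
            nui.SkeletonFrameReady += (s, e) =>
            {
                foreach (SkeletonData skel in e.SkeletonFrame.Skeletons)
                {
                    if (skel.TrackingState != SkeletonTrackingState.Tracked) continue;
                    var head = skel.Joints[JointID.Head].Position;
                    Console.WriteLine("Head at x={0:F2} y={1:F2} z={2:F2}", head.X, head.Y, head.Z);
                }
            };

            // Sound source localization from the four-mic array (property names from memory)
            using (var audio = new KinectAudioSource())
            {
                audio.Start();
                Console.WriteLine("Sound source angle ~{0:F2} (confidence {1:F2})",
                    audio.SoundSourcePosition, audio.SoundSourcePositionConfidence);
                Console.ReadLine();
            }

            nui.Uninitialize();
        }
    }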


OpenNI comes with skeleton tracking as well!


Kinect for Windows is, IMHO, Microsoft's only chance of really getting back in the game. It is an Apple-level sea change if they can capitalize on it.

I mean, let's be real here. Touch is a great way to interact with computing devices, but it screws up the screen. Until someone invents a smudge-proof surface, that will always be the case. Beyond that, usability studies show touch screens on PCs just don't work: people's shoulders start to hurt very quickly.

Kinect solves that. It's touch without touching. It can be done without having to reach over the keyboard. It IS the future if Microsoft can capitalize on it.


Touch for mobile, however, avoids the screen-mess problem, as cleaning the screen is as simple as putting it in your pocket/holster (assuming you have an oleophobic screen or screen protector) or wiping it across your sleeve/pants.


Remember that Apple R&D finding that mousing was quicker than keyboarding (because its low cognitive load doesn't interrupt your train of thought - keyboarding just seems quicker because you have amnesia while immersed in working out the right keys... [not applicable to expert fingers])?

Perhaps touching the screen instead of acquiring the mouse would be quicker and even lower cognitive load. This wouldn't be for every interaction, but as an adjunct to the keyboard - as the mouse is.

Not sure if true though - my mouse is at the same elevation as the keyboard, whereas the screen requires raising a hand about a foot. The cognitive load is lower though - people often instinctively touch the screen (if they haven't been screen-trained), especially when looking at someone else's screen. Perhaps that's a key? Collaborative use: a mouse is per person, but several can look and touch a screen.


Do you have a link to that Apple study? I find that very exciting and have never heard about it before. Though it would mean that I and many other geeks are plain wrong about our preferred input method.

You might be interested in DiamondTouch [1], which allows multiple users on one touch table. I tried it out at a presentation at the MediaLab in Nov 2010. The trick, if I remember correctly, is that the user sits on a special chair, which then detects an electrical connection between the user and the table.

[1] http://en.wikipedia.org/wiki/DiamondTouch


I believe he's referring to this:

http://news.ycombinator.com/item?id=2657135


Yes, that's it, thanks. For the GP, please note I said: [not applicable to expert fingers]


Interesting. I seem to remember a paper from a decade or so ago (can't find it right now) where Apple concluded that forcing mouse use is designed to slow the user down, to give them more time for thought.


How does it solve that? Walk me through that, my imagination must be lacking.

You seem to be imagining a trackpad that doesn’t have to be touched, that can consequently track a much larger area and that can track 3D space. That’s pretty sweet, I guess, if it works reliably and is as precise as current trackpads but it doesn’t seem all that amazing.


I think that might work... a holographic keyboard and trackpad that hangs in mid-air or on any surface pointed at it. Try doing typing motions in mid-air, or with your arms on armrests and your hands drooping off the end of the rests (the armrest scenario would be a split keyboard). The latter is more comfortable than what I am doing now - wrists and hands on a metallic/plastic laptop and keyboard.

Also, remember Kinect has a fairly solid voice-control engine. I know many people who use the Android talk-to-text function regularly. I wouldn't mind using similar tech to execute searching, reading, and communicating via text on the web.
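Even without the Kinect-specific beam forming, you can prototype that kind of command-and-control with the stock .NET speech stack and swap the Kinect mic stream in later. A minimal sketch using System.Speech (the vocabulary and wiring are made up for illustration):

    using System;
    using System.Speech.Recognition;  // desktop .NET speech API

    class VoiceCommands
    {
        static void Main()
        {
            // Tiny fixed vocabulary: "search", "read", "reply"
            var commands = new Choices(new[] { "search", "read", "reply" });
            var grammar = new Grammar(new GrammarBuilder(commands));

            using (var engine = new SpeechRecognitionEngine())
            {
                engine.LoadGrammar(grammar);
                engine.SetInputToDefaultAudioDevice();   // assuming the Kinect mic array is the default device
                engine.SpeechRecognized += (s, e) =>
                    Console.WriteLine("Heard: {0} ({1:F2})", e.Result.Text, e.Result.Confidence);
                engine.RecognizeAsync(RecognizeMode.Multiple);
                Console.ReadLine();
            }
        }
    }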


I don't think it solves the "starts to hurt" problem. It's not the touch that causes that, it's holding your arm up.


Right, but if you don't have to reach, you don't have to use your shoulder to hold up your arm - just your elbow, to move your hand up above the keyboard so the Kinect sensor can see it.


So what's the reason for a Kinect sensor then? I'd rather hit a key on my keyboard (which gives me instant physical feedback) than hover my hand in the air and try to grab a file or something.


It depends on what you use it for. For typing, it's not so good. As a browser back button, it's comparable to mouse gestures in effort.


Kinect is ahead on touchless movement and position, but is far behind on "click".


Smudges just aren't a big deal if the screen has a backlight.


I think that moving a finger to do an action is going to trump needing to move your whole arm or do jumping jacks for the same action. Especially since it would be fairly embarrassing for probably a whole generation of people before it became commonplace. But it might justify some of the more active users getting offices.

What you're suggesting, I imagine, is pointing at an item without touching it. Maybe users coordinate themselves with a virtual reflection on the screen or something, so they needn't physically touch it?

The background technology is indeed complex, but that doesn't really matter much. The biggest stopper is the fact that you are still putting an abstraction between yourself and the object you're manipulating.

Touching something and directly manipulating it (touch screens) is a very close coupling with little abstraction. But having to point while coordinating your "reflection", or to point without touching, is a stretch.


I'm not sure the abstraction bothers that many people. If every time you lifted your hand a virtual hand appeared on the screen, I think people would get by. I certainly think Kinect is preferable to Apple's current desktop solution (the touchpad touch interface).

Part of my logic here is that all of this is a placeholder.

EVENTUALLY someone will invent a smudge-proof technology with tactile elements, and we'll probably evolve to the point where our screens are angled (think Star Trek consoles) so the reaching isn't so much of an issue.

Until that comes along there will be abstractions and I think Kinect has a chance at being the best of those abstractions.

(In the above I assume you meant a mouse by "moving your finger". If you meant an iPad-like interaction, I'd agree it is better, but I think it is infeasible for desktop interaction.)


Fun facts from the Kinect SDK FAQ, at http://research.microsoft.com/en-us/um/redmond/projects/kine...

- You can't copy the runtimes with your application. Everyone has to download the SDK

- All non-SDK software (OpenKinect, OpenNI + OK, etc.) is now warranty-voiding

- Assume all software you see may be violating the SDK license

"Built to be open". Yup.


A couple of points:

1) For virtually every MS SDK I can think of, everyone has to download it. I can't think of any where this isn't the case, although I wouldn't be surprised if there were a couple. That's standard MS practice.

2) I think those things always voided the warranty. In general, I think hacks, whether on Android, iPhone, Wii, Roombas, Dysons, or whatever will void the warranty.


2) The open Kinect libraries do not in any way modify the Kinect hardware, firmware or otherwise. There is no 'jailbreaking' involved here. It's a -USB driver-!


Given that the device potentially draws power based in part on the driver, I think it's reasonable. It's unlikely to cause any problem, but warranties are about expected use -- not about covering any use, including those of hackers. They're NOT prohibiting hackers from using it, but if you brick it, then it's yours to unbrick. That seems very fair.


Am I the only one that noticed a big BETA sign on the site?

I remember the iPhone SDK beta license even prohibited developers from discussing the SDK or exchanging ideas with others, thereby leaving no room for forums, newsgroups, open source projects, tutorials, magazine articles, users' groups, or books (same with the iOS beta now, btw), and it was restricted to people who signed up for it.


Word on the street says that drivers for Kinect are going to ship with Windows 8.

I don't think that the new Metro UI shipping in Windows 8 is just about touch; I also suspect that the new interface is going to be gesture-sensitive as well.

Windows 8 is looking very shiny.


I have heard this is the case from a few reputable internal sources. Windows 8 <3s Kinect


I don't understand why Microsoft couldn't have just endorsed OpenNI, fixed the drivers, created some Visual Studio/.NET integration, and been done with it. They would have spent half as much time in development, it would have cost them less, and it would have given them the exact same product. And anyone, even without a Windows machine or Visual Studio, could have worked on this.

I definitely answered my own question.


The missing pieces of Minority Report are coming together.


If you couple this technology with Playstation 9, then yes :)


I'd love to try this, but the all-Microsoft stack required to use this SDK is a huge barrier to entry. I use a Mac; I don't know C#, Silverlight, Visual Studio, etc. In order to use this SDK I need to buy into MS tech on many levels. A browser plugin with a JavaScript API (even if IE-only) would be very compelling and a lot more accessible.


Yeah that would be like Apple requiring Mac hardware, OS X, Xcode, and Objective-C to do iOS development...


I have the same criticism there too. I prefer multi-platform IDEs.


At least a PC can be had for $300 to $500, unlike a Mac starting at $699.


...neither of which is really the point. Even if they both gave you free hardware, you've got to learn their technology stack just to try it out. Unlike, say, web development, where a few seconds of text editing gives you a "hello world" that will work on any platform.


How is that not the point? For a for-profit business, eventually making a sale is the major point.


I bought the Kinect. I'll use it as I see fit. Is that not a sale?

The previous poster's point was that the open source software allows one to use the Kinect in a cross-platform, standards-compliant setting. Microsoft's coming out with something half as featureful, with serious platform restrictions. Their SDK's only advantages are "officialness" and a more complete audio API.


>they both gave you free hardware, you've got to learn their technology stack just to try it out

They don't, and there are a lot of people, especially worldwide, for whom Macs are less common and much more expensive. Learning something is an investment of your free time, whereas hardware needs cold cash. Not exactly equivalent or comparable, especially when you're not in the first world.


Sign up as a startup for BizSpark, and their entire stack is free: https://www.microsoft.com/bizspark/Startup/Signup.aspx

You can run it in a VM.


>Applications that are built with this SDK Beta must run in a native Windows environment. You cannot run applications in a virtual machine, because the Microsoft Kinect drivers and this SDK Beta must be installed on the computer that is running the application.

That's out...


Bear in mind the similarities between C# and Java. This will help if you are already familiar with the latter.

A small example:

http://en.wikipedia.org/wiki/Comparison_of_C_Sharp_and_Java#...
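To make that concrete, a throwaway C# class reads almost like Java apart from properties and LINQ (purely illustrative, nothing Kinect-specific):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Scores
    {
        // Auto-property; the Java version would be a field plus getName()/setName()
        public string Name { get; set; }

        private readonly List<int> values = new List<int>();

        public void Add(int score) { values.Add(score); }

        // LINQ extension method; in Java you'd write the loop by hand
        public double Average() { return values.Count == 0 ? 0 : values.Average(); }
    }

    class Program
    {
        static void Main()
        {
            var s = new Scores { Name = "demo" };
            s.Add(90);
            s.Add(72);
            Console.WriteLine("{0}: {1}", s.Name, s.Average());  // prints "demo: 81"
        }
    }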


> A browser plugin with a JavaScript API

You sound like a web dev, which is fair enough, but isn't Microsoft busy (for varying values of busy) trying to promote native technologies and discourage the browser-as-a-platform view, for obvious business reasons?


I am. And Microsoft also recently announced that Windows 8 UIs are HTML5/JS, and yet here they go and release a brand new Silverlight/C# API. This is one very confused company.


I keep a Boot Camp partition for exactly these reasons, and there are free Express versions of Visual Studio for jumping into these types of projects.



I remember my friend coded a program that Alt+Tabs each time he moves his face to the right, and Alt+Shift+Tabs when he moves it to the left. It was awesome. :)
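The core of something like that is only a few lines on top of the skeleton stream. A sketch, with the caveat that the beta type names are from memory, the threshold is made up, and SendKeys may not always get Alt+Tab past the shell (keybd_event/SendInput is the usual fallback):

    using System;
    using System.Windows.Forms;             // SendKeys
    using Microsoft.Research.Kinect.Nui;    // beta namespace, names from memory

    class FaceSwitcher
    {
        const float Threshold = 0.15f;      // metres off-centre before firing (made-up value)
        static bool fired = false;

        static void Main()
        {
            var nui = new Runtime();
            nui.Initialize(RuntimeOptions.UseSkeletalTracking);

            nui.SkeletonFrameReady += (s, e) =>
            {
                foreach (SkeletonData skel in e.SkeletonFrame.Skeletons)
                {
                    if (skel.TrackingState != SkeletonTrackingState.Tracked) continue;
                    float x = skel.Joints[JointID.Head].Position.X;

                    if (!fired && x > Threshold)       { SendKeys.SendWait("%{TAB}");  fired = true; }   // Alt+Tab
                    else if (!fired && x < -Threshold) { SendKeys.SendWait("%+{TAB}"); fired = true; }   // Alt+Shift+Tab
                    else if (Math.Abs(x) < Threshold / 2) fired = false;               // re-arm near centre
                }
            };

            Application.Run();              // keep a message loop running
        }
    }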


It's a shame that the license is non-commercial only. I wonder if they've got a commercial version available if you know the right people.


I'm not sure that's the issue.

For Microsoft to release a commercial license they have to be able to support it. That means relatively few bugs, staffing tech support, etc. I don't think Microsoft is ready to do that with Kinect.

In fact, I think this SDK is a response to the homebrew solutions that were coming out. I think Microsoft saw the focus shifting off them and onto the open source community, and they didn't want to lose the good PR. Meaning this SDK was probably rushed out the door (and I'd suspect it has some serious bugs in it).


While I concur with the premise, I must disagree with the conclusion drawn.

At the risk of speculating, I'd say MS needs some more time before they release the commercial license, for the reasons you mentioned (robustness of the code, staffing tech support, etc.) and also possibly to see how the software is received by the developer community before deciding on commercial terms (it IS labelled beta after all).

Edit - From http://research.microsoft.com/en-us/um/redmond/projects/kine...

> "This SDK is designed for non-commercial purposes only; a commercial version is expected to be available at a later date."


There's a commercial version still under development. I think I read it would be available late this year.


The SDK is still in beta; a commercial license will come eventually.


This is a nice SDK. Its biggest improvement over the open source ones is the inclusion of skeletal tracking. This high-level interface opens a lot of great opportunities in human behavior tracking.

I'm just going to miss libfreenect and using my Mac to do this development.


NITE has allowed skeleton tracking for ages.


I really like that website design. Very different from what I'm used to from Microsoft.


That's the Metro UI (the UI in Zune, WP7, and Win 8) at work. More details here if you want to go in-depth:

http://www.riagenic.com/archives/487

http://www.riagenic.com/archives/493


Wow! That was really interesting. They're fit to be an HN post of their own.


I still think the Kinect technology is most interesting for the living room or public places (where you don't want people interacting with a physical object, if at all possible). Not sure my office is where it has as much use.


Can't watch the launch demo because I don't have Silverlight! Not what I hoped for as the SDK, but as others have said, I think it's their most innovative product and may in the end crack open an area for them to innovate in.


Didn't expect to see Anoop Gupta there. Very cool that he was involved.




