
Reword a public announcement [1], slap on a misleading title, put it behind a cookie banner and paywall, and boom - Journalism! "Bose is releasing documentation for EOL smart speaker HTTP API" would be more apt. Not even Bose claims in their statement that anything has been open-sourced. Titling the section "Open-source options for the community" is as close as they come to that.

Still, props to Bose for actively helping to keep their old devices usable.

[1] https://www.bose.com/soundtouch-end-of-life


> dot += A[i] * B[i];

Isn't it pretty bad for accuracy to accumulate large numbers of floats in this fashion? o.O In the example it's 640,000 numbers. log2(640,000) is ~19.3 but the significand of a float has only 23 bits plus an implicit one.
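
A quick sketch of what I mean (illustrative only, random inputs, not the code from the article):

  import numpy as np

  n = 640_000
  a = np.random.rand(n).astype(np.float32)
  b = np.random.rand(n).astype(np.float32)

  acc = np.float32(0.0)
  for x, y in zip(a, b):      # naive single-precision accumulation
      acc += x * y

  ref = np.dot(a.astype(np.float64), b.astype(np.float64))
  print(abs(acc - ref) / ref) # relative error, roughly 1e-5..1e-4 here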


Python's floats are doubles by default, so it's mostly fine.

That said, yeah, that implementation isn't ideal. At a minimum, Kahan summation is usually free on large vectors (you're bottlenecked on memory bandwidth anyway). The caveat is that you need to disable floating-point re-ordering so the compiler doesn't optimize the compensation away, which means you have to order the operations correctly yourself to keep it efficient (see some other top-level comments about data dependencies as an example).
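
For reference, a minimal sketch of Kahan (compensated) summation applied to a dot product, in plain Python just to show the algorithm; in a compiled language this is exactly where you'd have to stop the compiler from reassociating the compensation away (e.g. no -ffast-math):

  def kahan_dot(a, b):
      total = 0.0
      comp = 0.0                        # running compensation for lost low-order bits
      for x, y in zip(a, b):
          term = x * y - comp           # fold the error from the previous step back in
          tmp = total + term            # low-order bits of term get lost here...
          comp = (tmp - total) - term   # ...and are recovered here
          total = tmp
      return total

(Pairwise/blocked summation is another option that gets most of the accuracy benefit without the serial dependency chain.)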


I'm extremely disappointed to learn that Firefox allows access to motion sensors by default and provides no option to make it ask for permission. It can only be disabled completely, but this silently breaks legitimate uses like this one. This is a privacy nightmare. There is so much that can potentially be inferred from motion data: user identity, mode of transport (maybe even location), what they're likely to be typing in the URL bar or in an iframe, emotional state, a bunch of health information...

edit: Nice game, though. I wonder what it's made with. There seems to be a huge amount of generated shader code in the JS; I wonder if that could be avoided.


After dismissing the final pop-up it will go on to proclaim:

  > Yes!
  > 80085 is 00000000000000000 in binary!


It does look like a lot of 80085 if you squint a bit.


It just converted the result to base 0, I see nothing wrong here!

Edit: meant answer not result


80085 in hex is 0xTITS


My extremely tired mind: "holy f... wait..."


I did. I have an "Asus Zenfone Max Pro M1" (what a mouthful) and the volume is always too high and the audio ridiculously bad at "low" volumes. Changing some values with ALSA makes it almost bearable. The easiest way I found to do that was to root the phone, compile tinyalsa in termux and use this script to call tinyalsa with root:

https://pastebin.com/5f1jwpkb

If anyone has ideas on how to do this without root, get around the issue of calls being lower volume, or remove the dependency on tinyalsa and termux, I'm all ears.


Sorry, some objections...

> It's for deep learning, not that much "for graphics".

No, while it is true that there is some overlap between the techniques and concepts used, Gaussian splatting isn't necessarily "for deep learning". The library provides a differentiable rasterizer for Gaussian splats. This basically means that you can ask it "if I want my output image to change in this and this direction, in what direction should I change the position / orientation / color / ... of my splats?". This lets users plug it into other software (which is also commonly used for deep learning) and optimize the parameters of the splats to represent a particular scene.

Since it's primarily a differentiable rasterizer for splats, I think it's fair to say that it is "for graphics".

> The problem is "how do you do 3D deep learning 3D scene reconstruction" aka "how to make 3d equivalent of stable diffusion".

That it uses gradient descent doesn't mean that it is "deep learning". There are no neural networks or layers here.

It's not an "equivalent of stable diffusion". The way it's used now is to learn a representation of a single scene, not unlike photogrammetry. Sure, there may be other use cases for this library, but this is primarily what Gaussian splatting is about.
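
To make the "which direction should I change my splats" part concrete, here's a toy version of that optimization loop in PyTorch. The 1D render() below is only a stand-in for the real differentiable rasterizer, and none of these names are the library's actual API:

  import torch

  xs = torch.linspace(0.0, 1.0, 256)
  target = torch.exp(-((xs - 0.3) ** 2) / 0.002)      # the "scene" we want to reproduce

  # Parameters of a single 1D "splat": position, log-scale and amplitude.
  mu = torch.tensor(0.5, requires_grad=True)
  log_sigma = torch.tensor(-1.6, requires_grad=True)  # sigma = exp(log_sigma) stays positive
  amp = torch.tensor(0.5, requires_grad=True)

  def render(mu, log_sigma, amp):
      # Stand-in for the differentiable rasterizer: one gaussian on a pixel grid.
      sigma = torch.exp(log_sigma)
      return amp * torch.exp(-((xs - mu) ** 2) / (2 * sigma ** 2))

  opt = torch.optim.Adam([mu, log_sigma, amp], lr=0.02)
  for _ in range(1000):
      opt.zero_grad()
      loss = ((render(mu, log_sigma, amp) - target) ** 2).mean()
      loss.backward()   # gradients w.r.t. the splat parameters, no neural network involved
      opt.step()

  print(mu.item(), torch.exp(log_sigma).item(), amp.item())   # fitted splat parameters

Swap render() for the real rasterizer and the loop stays essentially the same, just with per-splat positions, covariances, opacities and colors as the parameters.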


I think I finally managed to disable tracker-miner on xfce.

  systemctl --user list-unit-files | grep -o "tracker-\S*\.service" | xargs systemctl --user mask
...wasn't enough. What did the trick was unchecking the remaining two entries in the "session and startup" settings dialog. Good riddance. It was the number one reason for my notebook heating up and draining the battery.


AFAIK with the Nokia N900 this was possible in 2009 using GStreamer. I never actually used it to create a video device, but I assume v4l2sink was already a thing back then. I did use it quite a few times for streaming to remote windows and to OpenCV.

Only good phone I ever had. I wish things were as easy on Android devices, but somehow they almost never are.


I'd really love devices & software that try to emphasize possibility & malleability. The modern consumer systems are rigid & huge on guard rails, making sure the user has a straightforward experience.

That's a hard task, worthy of respect too. But it feels like this hardline conservative outlook on software has utterly dominated what's gotten built for a long time now, with less and less computing that is interested, willing, or able to give power users solid ground & footing. Tech keeps increasing the distance, keeps becoming more esoteric, ironically because it keeps losing on-ramps to becoming an expert or explorer if you're interested in going further.


Actually sooner than that, on Symbian devices.


It renders fine on my end. Try checking if your browser is blocking cdnjs.cloudflare.com for some reason.


Yep, chrome works fine!


Great project and excellent presentation!

I've been trying to build something similar for tracking birds / panoramas / photogrammetry while avoiding non-printed parts as much as possible, but haven't been very successful so far. Because of that I'm particularly interested in the printed gears you are using. Maybe you could help me with some of my questions and concerns.

It looks like the whole thing is designed to be stiff, which means that the gears are pretty much at a fixed distance from each other. Doesn't that lead to backlash / play if the gears are a little bit too far apart or vibrations if there is too much pressure pushing them against each other? I've been worried about this and experimented with mounting the stepper motors on flexing parts to keep the pressure consistent and allow for more imprecision in the prints but I never really tried whether I can get away without that, so I'd be interested about how well this is working for you.

Pictures on the flexible motor mount idea: https://imgur.com/a/mM1Uql9

I would also like to know if there are any signs of wear after using the thing for a few days and what materials were used. I couldn't find anything on the type of filament on your website (did I overlook something?). I've only ever printed PLA so far and found that the gears would turn more smoothly after being used for a while. I imagine that the effect of wear might be different on the design without flexing parts.

Is it correct that the upper part is resting / sliding on "base-2"? Or is the bearing doing most of the work? Does it wobble at all? That was another one of the problems I had.


Hey there, thank you for the feedback!

Yes, this is completely stiff, no flexible parts. I've experimented with a few gears and different distances between them. I've found that this herringbone gear works very well. I don't experience any visible backlash, but the gearing ratio also helps to suppress that.

I've printed everything from Hobbyking Premium PLA. Just as you've described, the rotation gets smoother after some wear, but even after 30-40 hours of use the gears still don't seem to have become loose at all.

Securing everything on the bearing, there's no visible wobbling. I really recommend using a bearing if possible, it makes things so much easier. (:

Good luck with your project! (;


Thanks a lot for the answers. And for everything else, too!


If you really care about backlash, just measure it and then add a correction in your motor controller.
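
The simplest form of that correction is to add the measured slack to any move that reverses direction; roughly like this (a sketch, all values hypothetical, nothing controller-specific):

  class BacklashCompensator:
      def __init__(self, backlash_steps):
          self.backlash = backlash_steps   # measured slack, in motor steps
          self.last_dir = 0                # +1, -1, or 0 before the first move

      def compensate(self, delta_steps):
          """Steps to actually command for a desired relative move."""
          if delta_steps == 0:
              return 0
          direction = 1 if delta_steps > 0 else -1
          # On a direction reversal the motor first has to take up the slack
          # before the load moves, so command extra steps in the new direction.
          extra = self.backlash if self.last_dir not in (0, direction) else 0
          self.last_dir = direction
          return delta_steps + direction * extra

With, say, 12 steps of measured slack, a +200 step move right after a -200 step move gets commanded as +212.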

Another way to do this is to always move back to endstop zero and only move forward. This is very limiting.

The biggest problem I had was all my PLA prints melted in the direct sun!


> if you really care about backlash

At 1200 mm equivalent a picture is only about 1.15° high, so accuracy is important :)

> just measure it and then add a correction in your motor controller.

I think your first idea doesn't really work for tilting because there is little friction and the camera isn't going to be perfectly balanced (the center of gravity also changes with the focal length and even the focus). When gravity pulls the camera in one direction, backlash is less of a problem, but near the tipping point it becomes really shaky and sensitive to disturbances. Past the tipping point it is biased towards the other direction.

Preloading might be an easy fix but at the cost of acceleration.

> Another way to do this is to always move back to endstop zero, and only move forward. THis is very limiting.

At least for panning in panoramas this isn't a bad solution at all. I think both your ideas are good for panning with planned paths.

> The biggest problem I had was all my PLA prints melted in the direct sun!

Was it white PLA? So far I've only had stuff bend out of shape in the car.


I've built a similar device to this (originally for a solar tracker that keeps the sun at high magnification in the middle of the frame, but it turns out to be ideal for panos as well).

It's a DSLR with a crop sensor, 300mm lens, stepper+gear for azimuth and stepper+belt for altitude (with gearing to get more torque for both). I used this system with Hugin to assemble 180 degree x 180 degree panoramas in 5 degree azimuth and altitude steps (so, about 5X your resolution, though I don't have the same zoom as you). Results are great - I can give this straight to Hugin, update a PTO file with my known angles, and it assembles without any further optimization. I do pan with a planned path.
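
The planned path itself is nothing fancy, just a serpentine grid of target angles along these lines (illustrative only, not my actual code; ranges and step sizes are whatever the rig needs):

  # Cover 180 x 180 degrees in 5 degree steps, alternating the sweep direction
  # on every row so there's never a long return move between rows.
  positions = []
  for row, alt in enumerate(range(0, 181, 5)):
      azimuths = list(range(0, 181, 5))
      if row % 2:
          azimuths.reverse()
      positions.extend((az, alt) for az in azimuths)   # (azimuth, altitude) in degrees

  print(len(positions))   # 37 * 37 = 1369 shots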

If the belts are real tight, backlash is negligible (it seems to be a lot smaller than with any gear I could print). If you're having that much backlash on tilt (enough that it's shaky and sensitive), I think you might need to re-engineer the frame to be stiffer (which will make the shakes dissipate faster) or tune your motors (velocity and acceleration). You might also be able to use a spring to keep the gear in place. I tried to get a 3D-printed worm gear working, but never made it compact enough to be practical for the altitude axis.

I had white PLA, in direct sunlight / 90F ambient for hours, and it warped. I only noticed because I was inside watching the incoming images and the altitude stage started to "melt", causing massive positional losses.


Good results without optimization at 300 mm sounds pretty impressive. Maybe belts are the way to go. Thanks for taking the time to elaborate!

