Hacker News | mattybrennan's comments

Civis Analytics | Full Time | Remote (US) | DC, Chicago | https://www.civisanalytics.com

At Civis Analytics we bring objective, data-driven truth to organizational decision-making—all the way from the boardroom to the world’s largest social causes.

Our mission isn’t an aspiration, it’s something we see realized every day. We combine our SaaS products with specialized data science consultancy to enable and support data-driven decision making in organizations like the Bill and Melinda Gates Foundation, City of Boston, and progressive political campaigns.

Our product organization is customer-centric and leverages human-centered design practices to help build and improve our product experiences in ways that delight customers while strengthening our overall business.

We're hiring for lots of roles! A partial list:

- Senior Software Engineer
- Senior FrontEnd Engineer
- Software Engineer II
- Tech Lead
- DevOps Engineer II
- Senior DevOps Engineer
- Engineering Manager
- Director of Applied Data Science
- Data Scientist - Client Success
- Solution Architect Lead
- Senior Product Manager

Apply online here: https://grnh.se/266dd00b1


I am interested in the director role, but do not see it on the website. Let me know who I should contact.



I'm not sure I understand why the tree had to be climbed to be measured. Isn't that what trigonometry is for?


Yes and no. Trigonometry works if the tree's growth habit matches the assumptions of a right triangle.

Before laser rangefinders were cheap and accessible, the method used (angle and an approximation of horizontal distance) would often grossly overestimate the height of the tree.

The modern sine method is much better, but it still has problems with leaning trees and will tend to underestimate height.

The most reliable method is to get someone up there and do a tape drop.

https://www.nativetreesociety.org/measure/tree_measuring_gui...
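The difference between the two methods is easy to see with a toy calculation (the numbers are hypothetical, and Python is used just for the arithmetic): the tangent method measures horizontal distance to the trunk and assumes the top is directly above it, while the sine method uses the rangefinder's straight-line distance to the top itself.

```python
import math

def tangent_height(horiz_dist_to_trunk_m, angle_deg):
    # Tangent method: assumes the treetop sits directly above the point
    # you measured the horizontal distance to (the trunk base).
    return horiz_dist_to_trunk_m * math.tan(math.radians(angle_deg))

def sine_height(laser_dist_to_top_m, angle_deg):
    # Sine method: a laser rangefinder gives the straight-line distance
    # to the top itself, so its horizontal position doesn't matter.
    return laser_dist_to_top_m * math.sin(math.radians(angle_deg))

# Treetop 30 m above eye level but leaning 10 m toward the observer:
# it is 30 m away horizontally, while the trunk base is 40 m away.
angle = math.degrees(math.atan2(30, 30))        # 45 degrees up to the top
print(tangent_height(40, angle))                # ~40 m: overestimates by 10 m
print(sine_height(math.hypot(30, 30), angle))   # ~30 m: correct
```

(Both heights here are measured from eye level; in the field you add the corresponding term for the base below eye level, as the linked guide describes.)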


Seems like a great job for a drone.


You could use a theodolite. But remember that the ground is not flat. So you need to measure the vertical difference in height from the base of the tree to the theodolite. And you need to measure the horizontal distance, which would require other measurements in other locations. And you need a location where you can clearly see the top of the tree. This is all a bit complicated for someone without topographic surveying experience.

A tape measure is simple and easy to do accurately. And of course they will want to climb the tree anyway.


Just got an alert about 911 being out in MA. A bit belated, if state police twitter is correct that it was fixed 3 hrs ago https://twitter.com/MassStatePolice/status/10786404842292224...


MEMA just tweeted that it is still ongoing. I was confused by the MA State Police tweets hours before the alert too: https://twitter.com/MassEMA/status/1078694654604922885


Same here... I'm in Brookline, whereas the MSP stated the earlier outages were in towns well west of Boston.


Yup.


I mean, this calories in calories out study is in the same NYT issue: "Exercise May Aid in Weight Loss. Provided You Do Enough." https://nyti.ms/2KMz7f9


No, we weren't. This is something carefully measured by economists. The workforce is consistently becoming more productive, albeit at a slower rate.

https://www.bls.gov/opub/btn/volume-6/below-trend-the-us-pro...


It's not weird that an elected official wants to advance their agenda. What's weird is that precinct judge is an elected position in Philadelphia.


That is literally the point of it.


Gotta wait for the "Louis CK moment," where the technology is so boring that people are annoyed by it:

http://www.cc.com/video-clips/avmzsg/stand-up-louis-ck--give...


All the necessary hardware? Do Teslas have lidar?


They don't work in bad weather conditions like rain, so autonomous driving systems can't rely on LIDAR, which rules it out as the main sensor.


Lidar is not "necessary" hardware. Humans drive just fine without lidar.


It's pretty obvious what the poster meant. Why doesn't Tesla need lidar? Cars aren't humans, so that comparison isn't really helpful.


What the poster seems to have meant is that LIDAR is essential hardware. Which is not some universal truth, so it is they who should justify it.


Visual cameras can be blinded fairly easily - even high-end cameras - simply point such a camera at the sun and try to make out a cloud in the sky. If Tesla were using true high-dynamic-range cameras (e.g. Oversampled binary imagers) then I would be more confident - but Tesla isn't saying that they are - and if they really were they would definitely boast about it.

LIDAR also does work great in the rain - provided you have multiple LIDAR units (e.g. Ford's snow-proof sensor array: https://qz.com/637509/driverless-cars-have-a-new-way-to-navi... ).

What I like about LIDAR is that it will never give you false-negative data regarding object proximity (i.e. it will never tell you an obstruction in the road is not there) but visual-only cameras can be fooled very easily and definitely can give you false-negatives regarding road obstructions.

It seems inherently less safe to rely on a more homogeneous sensor array: conversely it makes sense to use as many different types of sensor as possible to ensure your design isn't susceptible to being brought down thanks to a weakness in your predominant sensor type.
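As a back-of-the-envelope illustration of that last point (the miss rates below are made up, and real sensor failures are only approximately independent): if each sensor type fails in a different way, the probability that every one of them misses a real obstacle is the product of the individual miss rates.

```python
def combined_miss_probability(per_sensor_miss_rates):
    # If the sensors fail for *different* reasons (sun glare for cameras,
    # heavy rain for lidar, ...), their misses are roughly independent,
    # so the chance that every sensor misses a real obstacle is the
    # product of the individual miss rates.
    p = 1.0
    for miss in per_sensor_miss_rates:
        p *= miss
    return p

# Illustrative numbers only: three mediocre but heterogeneous sensors
# beat a single sensor that misses 1% of obstacles.
print(combined_miss_probability([0.05, 0.10, 0.10]))  # ~0.0005
print(combined_miss_probability([0.01]))              # 0.01
```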


Lidar is absolutely useless for level 5 autonomy, given that it doesn't work in "bad" weather. It may be useful for level 4 autonomy, but it is certainly not necessary even then. Source: no car on the road today has a lidar, apart from testing purposes.


The human visual cortex is still far more advanced than a computer's. Maybe that will change some day, but asking about non-camera sensing equipment is a reasonable question until it does.


Sure, and it's not only the cortex. Humans also know a lot about the world and can predict things much better. There are areas in which humans are limited, however, such as reaction time, spectral sensitivity, the fact that humans can only see well in the center of where they're looking and fake the rest of their visual field, etc. It's not at all clear to me that a human pair of eyeballs is better than 8 high quality, wide spectrum cameras feeding into the system at once.

That said, I think it's disingenuous of Tesla to call their system "autopilot" or imply autonomy of any kind when talking about their system. I will call something autopilot when it can drive me from door to door without me touching the wheel, in less than friendly weather conditions. Not drive in a straight line where it never rains.


You can have a thousand cameras and it still doesn't matter if you don't have a good computer to process the data. That is where machine learning comes in. Machine learning is really starting to come into its own in the last few years, especially for image recognition, but it still sucks compared to humans. That doesn't mean self-driving based on cameras isn't good enough, but it needs to be proven and we aren't there yet.


Dude, I work on high performance machine learning and machine vision 12 hours a day. You don't need to tell me it sucks, I know. But it's superhuman on some tasks already, and in a few short years, it'll be superhuman on a few more, and little by little it'll get there. All you get from that lidar is a depth map. You can do that without a lidar, using two or more cameras. If you also interpolate across a series of predictions, and have sensor fusion (which Tesla does, they also use radars and ultrasonic sensors) you can even make it robust. Truly, people who say it can't be done should not interrupt people who are doing it.
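For what it's worth, the geometry behind getting a depth map from two cameras is plain stereo triangulation. Tesla's actual pipeline isn't public, so the camera parameters below are made up purely for illustration:

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    # Pinhole stereo geometry: a point at depth Z appears shifted
    # horizontally between the two cameras by d = f * B / Z pixels,
    # so Z = f * B / d. Farther objects produce smaller disparities.
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 1000 px focal length, cameras 50 cm apart.
print(stereo_depth_m(1000, 0.5, 10.0))   # 50.0 -> a 10 px shift means 50 m away
print(stereo_depth_m(1000, 0.5, 100.0))  # 5.0  -> closer objects shift more
```

The hard part, of course, is not this formula but reliably matching the same point between the two images; that is where the machine vision effort goes.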


Not really, humans are horrible drivers with miserably slow reaction times.


Some humans are horrible drivers. Some humans are good drivers. The biggest difference is often how proactive the driver is.

It's true that a careful, experienced driver will typically recognise a rapidly emerging hazard as much as several tenths of a second faster than a novice, giving them significantly more time and space to react. However, a careful, experienced driver will also anticipate places where there are likely to be hazards and adjust their driving style to compensate.

Does a self-driving car know that there's a park just round the corner and it's half an hour after the local kids came out of school, thus increasing the risk of a child chasing a ball into the road?

Does a self-driving car understand that the group of people standing quite near the road up ahead are outside a bar at 11:30pm and thus quite likely to be drunk and suddenly stagger into the road?

Does a self-driving car know about the pothole in the cycle lane that you had to avoid while riding into town yesterday, and anticipate that anyone riding in that cycle lane today may move out into the main traffic lane without warning to go around it?

Does a self-driving car know that the news last night reported on a local black spot for "accidents" caused by people wanting to make fake insurance claims, and decide to take another route that is a little slower but avoids that black spot?

Better sensors, fast data processing, and the ability to monitor all sensors all the time are big advantages, for sure, but these things mostly support reactive behaviour. I've seen nothing so far to suggest that the better reactions currently outweigh proactively avoiding or mitigating these kinds of hazards in the first place. Obviously that might change in any number of ways in the future, but we seem to be a long, long way from that point yet.


a) Most humans aren't aware of these things, either, so they're really non sequiturs at best. b) Even if you accept them as valid premises, it's much easier to disperse this kind of info to every car on the road than it is to disperse it to humans (every tourist in a city needs to know where every bar / park is? Or watch the local news?)


Those were all real examples. These kinds of things happen in my area every day, and drivers are actively taught to look for signs like these before they are allowed to qualify and drive independently. Obviously not everyone gets the message, and the best anticipation skills only develop with more experience, but nothing I described was unusual or exceptional (other than the last one, which was quite a specific example of a more general idea).

On your second point, the important thing here is that you don't need to disperse much of this information to humans. Humans automatically recognise situations based on all of their experience, not just their driving experience. Of course sometimes external information sources like the news might be helpful, but much of it is just down to understanding context. See fresh horse crap on a country road? Someone's probably riding horses nearby. Horses scare easily. So, slow down and try to avoid anything noisy that could startle the animals. How many of today's self-driving algorithms take into account this kind of implied knowledge?


Humans have the computation equivalent of 38 petaflops of processing power. Does a Tesla vehicle?

If you seriously want to play the inane game of "well if a human doesn't have it then a Tesla doesn't need it" then let's play that game and talk about the things humans have that the Tesla lacks.

What's interesting about the human vision system is that the human eyeball is, relatively speaking, poor. We have digital cameras far better than that already. It is what the human brain does with that raw data which makes us, as a species, thrive. Most of what we believe we "see" we never actually see; our brain fills in the gaps dynamically and infers information over time.

So this human processing ability, much of it automated rather than conscious, is totally relevant if you want to have this "Tesla Vs. human" debate. It is also why Lidar might be needed to make up the massive shortfall in a Tesla's processing ability relative to the human brain.

But hey, you want to keep to the "but HUMANS don't need it" then I ask where is my 38 petaflops and 1 TB of memory...


You don't need 38 petaflops to drive a car. We are wasting our minds driving. Driving doesn't need creativity, it needs the 360 degrees of awareness without any lapse in concentration and the ability to react in milliseconds.


I would peg the human brain at closer to 1 flop. That's about how many floating point operations I can do in a second, and only very simple ones.


You do more than that when calculating the trajectory of a ball thrown that you have to catch. Just not with numbers.


Humans also lose focus, fall asleep and get tired.

Teslas have multiple radars for judging distance and multiple cameras that are used for stereo disparity. Also, the human brain's 38 petaflops are not the same as Nvidia flops.

I am not saying Teslas are better than humans, I'm just saying Teslas can drive the I-5 from Vancouver to Mexico better than I can.

Also, lidars are really, really expensive. I applaud Tesla and comma.ai for breaking major ground with cameras alone. Convolutional neural nets have been doing phenomenal things in the past few years.


I'm curious where you get the number "38 petaflops" from.


IBM researchers: https://blogs.scientificamerican.com/news-blog/computers-hav...

You can find other figures, but many are in the petaflop range, well above what could be realistically installed in a vehicle.


How about 60 bits/s:

https://www.technologyreview.com/s/415041/new-measure-of-hum...

I don't think we know enough about how the human brain works yet to give a precise value, but just on caloric arguments I would say that the mean processing power of the brain is not significantly above what we have now in general purpose computing devices.


In the same sense that your dog solves differential equations when he catches a Frisbee, I suppose.


It's memory + control driven with visual feedback, not much more. You don't have to solve anything if you already sorta know the solution, and can adjust it for the goal.

