Hacker News new | past | comments | ask | show | jobs | submit | more kefka's comments login

Ok. So, how many of these are Facebook Ghost profile users?

https://spideroak.com/articles/facebook-shadow-profiles-a-pr...

Oh yeah, you think you're safe because you don't have a Fb account, or closed it when it was popular to do so. Guess again... Your "friends" will keep posting about you, and Facebook will keep building a profile.

Better yet, I suggest people keep their Fb account, pollute it, and lock it down. Don't install the phone apps. Block their emails from @facebook.com and @fbcdn.com. The long and short: ghost them and don't be the product.


One of the most frustrating (and shady) things I have seen Facebook do is ask your friends for personal information about you. I have very little personal info on my profile, but I know that Facebook has all of it, because I have seen little popups saying things like "Where did your friend so-and-so go to high school?"

Then later, you will see a question on your profile like, "Did you go to ... High?" So you know they are storing it whether you confirm or not.

They will also ask "security" questions of people trying to reset their password. They seem to include a few things that are in Facebook and a few items that are not, gathering even more details about my personal life that I did not choose to share with Facebook.

I try to frequently seed it with false information. But I know plenty of people are out there just merrily giving up all my personal information to this beast.


These are (monthly/daily) active users, not total number of profiles.


Long and short: motors store energy in their magnetic field while current is flowing. When it's not, the coil's field collapses and turns back into electricity.

Now think of all that electricity as one big wave, because that's what it is. If you were running 12 V, you can see a surge upwards of 30 V. This is bad.

The key is to equalize the power on the motor. That's done by putting a diode across the terminals, oriented against the normal current flow, so that big flow of electrons can equalize itself BEFORE hitting other silicon (like the MOSFET or your RasPi).

Ideally, you want to do this for motors, electromagnets, solenoids, and inductors (well, unless you're building an L-based filter, but that's beside the point). They all have this magnetic energy -> electricity -> surge thing going for them.
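To put rough numbers on that surge: the spike follows V = L * di/dt, so interrupting even a modest current quickly produces a big voltage. All component values below are illustrative assumptions, not measurements from any particular motor:

```python
# Back-of-the-envelope estimate of an inductive kickback spike.
# All values below are assumed for illustration.
L_coil = 0.010      # coil inductance in henries (10 mH, a plausible small DC motor)
current = 1.0       # amps flowing when the switch opens
switch_time = 1e-4  # seconds for the MOSFET to stop conducting

# V = L * di/dt : the faster the current collapses, the bigger the spike.
spike_volts = L_coil * current / switch_time
print(f"Kickback spike ~ {spike_volts:.0f} V")  # ~100 V on a 12 V supply
```

A flyback diode across the terminals clamps this to roughly the supply rail plus one diode drop, which is why it goes in reverse across the motor.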


Well, it's now immortalized as a hash in IPFS. Good luck censoring that.

http://gateway.ipfs.io/ipfs/QmeLsfKxF4dhmyX2FSGotaDPmMEqe8p3...

UPDATED HASH: http://gateway.ipfs.io/ipfs/QmUayNU49TWHMid6pSEBSKPAHsxJkTnd...

There are still a few hits going to Tumblr and fonts.gstatic.com, but all the actual content is safely inside IPFS and being served from there. I'll let someone else rip the external calls out.


It tries to load the images (that Zillow is purportedly threatening the author over) from Tumblr though...


Tumblr caches everything for a looooong time.


Be that as it may, it's kind of weak to say it's hosted on IPFS if the majority of the content is actually on Tumblr, not IPFS.


Not a deep copy. Those images are still hosted via tumblr.


I fixed it. I thought that when Firefox saved the Web Page (complete), it was inlining uuencoded binary as part of the image tag. I was wrong.

There are still a few hits to the 2 sites I mentioned, but no real content. The text and images (what actually matters here) are all IPFSified.


Looks good to me now.


Well..

I could see a path forward, but it would take quite a bit of CNC-like automation. Primarily, I can see a hydroponics setup being used with Farmbot.io-style automation. With this, a bot would zip up and down powered rails and use Wi-Fi to communicate with the base station. Buckets would be provided for the toolhead to pull plants that are done growing.

Ideally, this could be used to grow herbs, lettuces, and the like on huge racks. Of course, in the jurisdictions that allow it, could be modified for cannabis as well.

The long and short is that machine vision can be used to determine the fitness of a plant and whether it has finished growing. Add tilapia to this and you have a near closed-loop biological system. And then you can also sell fresh live tilapia.
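A minimal sketch of the machine-vision piece, using only NumPy. The threshold and the "done growing" rule here are my own stand-ins for illustration, not anything an actual product ships:

```python
import numpy as np

def green_ratio(rgb):
    """Fraction of pixels where green dominates red and blue."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    green = (g > r) & (g > b)
    return green.mean()

def looks_ready(rgb, threshold=0.30):
    # Toy rule: call the plant "done growing" once 30% of the frame is leafy green.
    return green_ratio(rgb) >= threshold

# Synthetic 4x4 test frame: top half green pixels, bottom half soil-brown.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:2] = [30, 200, 40]    # green rows
frame[2:] = [120, 80, 40]    # brown rows
print(looks_ready(frame))    # True: 50% green exceeds the 30% threshold
```

A real system would obviously use a trained classifier per crop, but even a crude color heuristic like this is enough to schedule a closer look by the toolhead camera.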

To give an example, 25 sq ft of this system provides enough food for a human indefinitely.

This also localizes food production, thus strengthening national food security. And that one wins over a lot of conservative types who want to see a gain rather than "feed the fuzzies" reasoning.


> Of course, in the jurisdictions that allow it, could be modified for cannabis as well.

I suspect this is the actual reasoning and target product.

Unfortunately, you can't get funding for that legally in most jurisdictions.


Well, yeah. Shrooms and weed are the two I could think of that would easily support hydroponic grow systems. Ideally, I could see them taking off in the Southwest, given the intensity of the sun and wind power. The only real running cost is water, and that should be closed-loop as much as possible. I'd wonder if some of the new metal-organic frameworks could efficiently pull water from the air, as they were reported to do.

But Republicans are still moralizing Christians who wish to force their ideals upon others. Because if anything's been shown, it's that cannabis is one hell of a tax generator and a hugely underutilized commerce. But setting aside that usage, illegal in most jurisdictions, this could support growing in bad environments, or appeal to people with way too much money to spend.

There could be a different segment: people who want a guaranteed food supply. And locally, this could make food safety a guarantee. I could see this as a non-profit as well, with low-income and homeless people getting free access.


> And locally, this could make food safety a guarantee.

No it doesn't.

If a farm has a bad harvest, there are other farms a store can buy from. If the store has a bad harvest, it's competing with its own customers to buy food from the next store over.


How the hell would you use a hydroponic system to grow any type of mushroom? That makes no sense.


Hey I am building something of this nature but a much simplified version. It's www.kokonaut.com


Neat. But a friendly word of warning.

You've got DHT11s pictured in your product photos. Don't use DHT11s for, well, anything.

I deployed ~100. After 8 months, only 35 still work. Even when they worked, the data was so variable they were constant trouble. Of the 35 that still work, about half are now giving useless humidity readings.

Caveat emptor and all that yap.


Have you found an alternative that works more consistently?


A little background. I made essentially exactly what he's selling, as a garden-style monitor to place in a few dozen machine cabinets and motor compartments in a newspaper printing plant. Printing presses are hot, wet things, and you can predict failure if things get too hot or too humid.

So: bare ESP8266 modules, OSH Park boards, and the sensors. After my DHTs started kicking the bucket, two things helped:

1) Switching my software to power the sensors only for the time needed to take measurements (once per 10 minutes). It takes some seconds to stabilize before taking the reading, but the sensors spend most of their time off. I had to cut traces and add a little transistor to the boards to make this happen. That sucked.

2) I picked up some SHT71s and was able to bodge them onto the failed boards in place of the dead DHTs. All it took was a firmware update for my ESP8266. This was easy. I have not lost one since, though I don't know how much 1) has to do with that.

As an added benefit, I've got my ESPs deep-sleeping during the off times (had to add one jumper wire to make this work... easy) and can now power them for weeks at a time on a pair of alkaline batteries.
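The battery-life win from duty cycling is easy to estimate. A rough sketch, where the current draws are typical ESP8266 datasheet-style ballparks rather than figures measured from this setup:

```python
# Average-current estimate for a duty-cycled ESP8266 sensor node.
# All figures are rough assumptions, not measurements.
active_ma = 75.0        # mA while awake with the radio on
sleep_ma = 0.02         # mA in deep sleep (~20 uA)
awake_s = 10.0          # seconds awake per cycle (boot + sensor settle + send)
cycle_s = 600.0         # one reading every 10 minutes

# Time-weighted average current over one cycle.
avg_ma = (active_ma * awake_s + sleep_ma * (cycle_s - awake_s)) / cycle_s

battery_mah = 2000.0    # roughly a pair of alkaline AAs
hours = battery_mah / avg_ma
print(f"average draw {avg_ma:.2f} mA -> about {hours / 24:.0f} days")
```

With these numbers the average draw is about 1.3 mA, which is why "weeks on a pair of alkalines" is plausible, while an always-on node at 75 mA would be dead in about a day.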


Deep sleep is definitely the way to go :).

For sensor comparisons, check out this site. Someone on a previous Hackaday post turned me on to it a few months back. It's very comprehensive, and it's what led me to go with the BME280:

https://www.kandrsmith.org/RJS/Misc/Hygrometers/calib_many.h...


Very true.

Those were the old versions. The new versions use the BME280.


My critique.... You can look me up as kefka in your system as well. I had a chance to play quite a bit over the weekend :D

1. The tutorial could use significant tightening up. Compact it to 1/4 of the text/messages/popups. Think of this as a grind: get them through fast so they can start playing.

2. Allow undoing moves until "End Turn" is pressed. More than a few times, I mis-clicked a move onto the same square a piece was already on, effectively a no-op. Even a global undo that resets the turn's state would be better than nothing.

3. For the early-mid game, mages seem vastly overpowered. I was expecting a rock-paper-scissors mechanic, but mages seem to blast anything below a Centaur away... Even the knight hiding in the mountains gets slaughtered.

4. Let me hit my own units. Sometimes when a pile-on is happening, I want to off my 3 HP horse to move in a Giant.

5. Too many catapult types. There is a difference between them, but not really. Perhaps the fire catapult could be an area-of-effect fire bomb, including hitting your own units?

6. Sound would be nice, but lower priority than these.


> IPFS and blockchain are technologies that are built on top of the internet - they assume a network connection already exists.

I won't speak for blockchains. Call them by the boring name: "append-only databases with a consensus mechanism on what to add, and a proof-of-something to affirm that work of some sort was done".

IPFS is different. They have already planned for IPv4 not being the last thing. Or IPv6, or IP8, and on. They created what they call a multiaddr, which encodes the protocol definition to tell peers and IPFS what protocol stack to use, and then they layer IPFS on top of that.

Obviously, when a new protocol comes into play, they add a new multiaddr type for it, and off you go.
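A multiaddr is just a self-describing path of (protocol, value) segments. A toy parser to show the shape (real IPFS uses the multiaddr libraries, and real multiaddrs also allow value-less protocols, which this sketch ignores):

```python
def parse_multiaddr(addr):
    """Split a multiaddr like /ip4/127.0.0.1/tcp/4001 into (protocol, value) pairs.

    Toy version: assumes every protocol carries a value, which is not true
    of all real multiaddr protocols.
    """
    parts = addr.strip("/").split("/")
    return list(zip(parts[0::2], parts[1::2]))

layers = parse_multiaddr("/ip4/127.0.0.1/tcp/4001")
print(layers)  # [('ip4', '127.0.0.1'), ('tcp', '4001')]
```

The point is that the address itself names the whole transport stack, so a future protocol just becomes a new leading segment rather than a breaking change.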


> I wish machine learning research didn't respond so strongly to trends and hype,

It's really because nobody actually understands what's going on inside an ML model. When you give it a ginormous dataset, what data is it really using to make its determination of

[0.0000999192346, 0.91128756789, 0, 0.62819364, 32.8172]

Because what I do for ML is a supervised fit, then use a test set to confirm fitness, then unleash it on untrained data and check. But I have no real understanding of what those numbers actually represent. I mean, does 0.91128756789 represent the curve around the nose, or skin color, or a facial encoding of 3D shape?
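To make that workflow concrete, here's a minimal sketch with NumPy, using a nearest-centroid classifier as a stand-in for whatever model is actually in play. The data, split sizes, and model are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated synthetic 2-D classes.
a = rng.normal([0, 0], 0.5, size=(60, 2))
b = rng.normal([3, 3], 0.5, size=(60, 2))
X = np.vstack([a, b])
y = np.array([0] * 60 + [1] * 60)

# Shuffle, then split: training set, test set, and a final "untrained" holdout.
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
X_tr, y_tr = X[:80], y[:80]
X_te, y_te = X[80:100], y[80:100]
X_new, y_new = X[100:], y[100:]

# "Fit": one centroid per class. "Predict": nearest centroid.
centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])

def predict(pts):
    d = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

test_acc = (predict(X_te) == y_te).mean()
unseen_acc = (predict(X_new) == y_new).mean()
print(test_acc, unseen_acc)
```

The accuracies tell you the pipeline works; they tell you nothing about which input features the centroids actually encode, which is exactly the complaint above.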

> I'm still wondering what, if anything, is going to supplant deep learning.

I think it'll be a slow climb to actual understanding. Right now, we have object identifiers in NNs. They work, after TBs of images and PFLOPS of CPU/GPU time. It's only brute force with 'magic black boxes' - that provides results but no understanding. The next steps are deciphering what the learned understanding is, or building straight-up algorithms that can differentiate between things.


Shouldn't it be possible to backpropagate those categorical outputs all the way back to the inputs/features (NOT the weights) after a forward pass, to localize their sensitivity with respect to the actual pixels of a prediction? I imagine that would give at least some insight.

Beyond that, the repeated convolution/max-pool steps can be understood as applying something akin to a multi-level wavelet decomposition, which is pretty well understood. It's how classical matched filtering, Haar cascading, and a wide variety of preceding image-classification methods operated at their first steps too.

CNNs/deep learning really don't seem like a black box at all when examined in sequence. To me, at least, randomized ensemble methods (random forest, etc.) are a bit more mysterious in their out-of-the-box performance with little tuning.
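Coming back to the first point: input sensitivity is easy to demo for any black-box scorer by finite-differencing each input and seeing which ones the output actually depends on. The tiny "model" here is invented purely for illustration:

```python
import numpy as np

# A black-box "model": secretly, only the first two inputs matter.
w = np.array([2.0, -3.0, 0.0, 0.0])

def model(x):
    return float(w @ x)

def saliency(f, x, eps=1e-6):
    """Finite-difference sensitivity of f at x, one value per input feature."""
    grads = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        grads[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return grads

x = np.array([0.5, 0.5, 0.5, 0.5])
print(saliency(model, x))  # ~[ 2. -3.  0.  0.]: the last two inputs are irrelevant
```

For a real CNN you'd use autodiff instead of finite differences, but the idea is the same: the gradient with respect to the input pixels is a sensitivity map.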


I'm in no way a researcher or even an enthusiast of machine learning, but I'm pretty sure I came across a paper posted on HN a few days ago that did exactly what you and the parent poster are describing: figuring out which pixels contributed most to a machine learning model's output. I'll see if I can find it.

Edit: yep, found it.

SmoothGrad: removing noise by adding noise, https://arxiv.org/abs/1706.03825

Web page with explanations and examples

https://tensorflow.github.io/saliency/

I couldn't find the HN thread, but there was no discussion as far as I remember.


Bagging and bootstrap ensemble methods aren't really that confusing. Just think of them as stochastic gradient descent on a much larger hypothetical data set.

The effect is the same one that occurs when you get a group of people together to estimate the number of jelly beans in a jar. All the estimators are biased, but if that bias is drawn from a zero-mean distribution, the deviation of the averaged estimate goes down as the number of estimators increases.


I think you might be on to something, but the big problem here is that the input is hundreds of GBs or TBs. It's hard to understand what a feature is, or even why it was selected.

I can certainly observe what's being selected once the state machine is generated, but I have no clue how it was constructed to make the features. To determine that, I'd have to watch the state of the machine as it "grows" to the final result.


It already does. Check out OpenBazaar's patches that add Tor and I2P functionality.


That's what I was referring to: their Tor support is not in mainline IPFS yet, AFAIK, although it should be soon.


Yeah. Primarily, OpenBazaar did it one way, and the #ipfs team on Freenode isn't sure that's the best way.

It primarily has to do with Kademlia and the DHT. How does one add a transport or adapter to IPFS that is guaranteed not to leak other addresses? Does one run a separate IPFS for Tor and I2P? Should it be integrated as a flag on those interfaces?

The IPFS team wanted to get everything else settled protocol-wise before going down the road of secured, hidden protocols, given IPFS's propensity for splattering all interfaces through it (even unroutable internal network addresses).


>• Stupid Question 3: Double the cube.

     (X^3) * 2  -- Done.
>• Stupid Question 4: Square the Circle.

     r is known. X is not.
     pi * r^2 = X^2
     sqrt(pi * r^2) = X , for positive X
oh, geometrically? No. Algebraically works cleaner, and for any arbitrary positive solutions for r.

>Its central dogma is thou should prove everything rigorously.

That's not dogma. It's a proof because anyone, no matter who, no matter when or where in the universe, can duplicate these results and show they are logically true. Or they can show the results are logically false, no matter the inputs given.

It's not "dogma", like some high edict from a Pope. A rank amateur could further the field by proving a new theorem, because the person doesn't matter. The soundness of the logic does.


> That's not dogma. It's a proof because anyone, no matter who, no matter when or where in the universe, can duplicate these results and show they are logically true. Or they can show the results are logically false, no matter the inputs given.

It is not even only that. Rigorous proofs show the limits of your knowledge. Modern math is a huge edifice that we would be completely unable to build if we based it on intuitive, semi-rigorous foundations.

Yes, proving things is boring, and won't add anything to your immediate problem. No, we still need it, like we need many other kinds of investment.


> oh, geometrically? No. Algebraically works cleaner, and for any arbitrary positive solutions for r.

I hope you're not being serious. Just in case you are, your algebra is wrong. I'm quite certain you didn't look up what "doubling the cube" means, since the (faux) algebraic solution is y = cube_root(2 * x^3). It undercuts the rest of your comment.


My doubling-the-cube answer was deliberately ridiculous, to highlight how badly the problem was worded.

If it really meant doubling the volume of a cube of side x, then absolutely it's (2 * x^3)^(1/3).
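Quick numeric check that this side length really doubles the volume, for an arbitrary x:

```python
x = 5.0
y = (2 * x**3) ** (1 / 3)           # side of the doubled cube
print(abs(y**3 - 2 * x**3) < 1e-9)  # True: the new cube has twice the volume
```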


I don't think the author was trying to give a precise description of the problem. "Doubling the cube" is a term of art. It's like if he used the word "derivative" and you thought it meant a cheap copy of something, and then went further to prove how silly calculus was because of your misunderstanding of the term.

You're also selling the problem short. Doubling the cube is about producing a finite algorithm (given a limited set of operations) that realizes the value of (2 * x^3)^(1/3) concretely. An algebraic solution does not do this, because it stops at the inability to realize, say, the cube root of 2 explicitly.


It is effectively uncensorable because the world is not one government but 200-something. And no matter how much kicking and screaming the US does, not every country will comply.

It also exploits the fact that the IPFS cache is used to deliver content to others, similar to the way BitTorrent uploads blocks of content while downloading.

There still has to be someone hosting it initially for it to spread. But even that can be done over Tor, given the patches from OpenBazaar and IPFS.


> not every country will comply.

Then host it in one of the non-compliant countries, no need for IPFS.


Then the other countries can just block that host.

