hacker news with inline top comments    .. more ..    24 Jan 2016 News
Virginia Tech Professor Spent $147k to Help Uncover the Flint Water Scandal attn.com
154 points by rmason   ago   42 comments top 11
1
olympus 19 minutes ago 4 replies      
Weird that nobody has mentioned this yet: The article claims that Prof. Edwards spent $147k from his discretionary research funds and personal funds. He's still a good guy and has probably used a non-trivial amount of his own money, but it's not like he had to pull all of it out of his bank account. Discretionary research funds are given to researchers as money for them to spend on whatever research they find interesting. It's kind of like 3M's 20% time. The article doesn't mention how much of the $147k came from his research money and how much came from his pocket book. Why? Because it's not as sensational to find out that this guy (who is a tenured professor at a big school) put up $125k of research money and $22k of his own. I don't know what the actual numbers are but I wouldn't be surprised if he put up less than $50k of his own money and had the vast majority come from research funds, considering his history (wikipedia claims he received a $450k grant in 2011 from the EPA to study lead and copper in the water).

Again, I'm not saying he's a fraud. He probably still put up a decent chunk himself. He did take out a mortgage on his house, but it doesn't mean that he put 100% of the value of his house towards this crisis. All I'm saying is don't get caught up in all the sensationalism.

2
nemild 1 hour ago 1 reply      
Is anyone else saddened that it takes a personal mortgage by a passionate professor and his team to figure this out? And that even after they proved it, a GoFundMe campaign is the only way to recoup the funds?

With the agencies involved dragging their feet, was there no other way to get someone else involved? Is there a whistleblower fund like the SEC has: https://www.sec.gov/whistleblower that gives them a portion of the impact they had (10-30% for amounts over $1 MM in the SEC's case)?

3
gjmulhol 2 hours ago 3 replies      
Wow, I was under the impression that the Flint situation was primarily driven by bad policy put in place by people who didn't really know what they were doing, but it seems like there was a lot more nefarious action than that.
4
kevindeasis 35 minutes ago 0 replies      
I've heard from my colleagues that there are leaked emails around about how they chose to switch suppliers even though they knew about the lead contamination.

Anyone have a link to the said email dump? The other links are broken.

5
jrjr 34 minutes ago 0 replies      
City of Detroit's FAULT. The attempted gouging of Flint for their water supply is the cause. Who was the person that allowed that attempt?

That is the cockroach that needs the light shone upon it.

follow the money.

jrjr

6
DanielBMarkham 42 minutes ago 0 replies      
7
pfarnsworth 41 minutes ago 2 replies      
This is sickening. How this could occur in 2016 needs to be studied and eliminated with prejudice. To think that government officials look at that orange water and think that's normal. It reminds me of the Monsanto lobbyist who claimed that RoundUp is perfectly safe to drink, but then instantly refused to drink it when offered. I wish there were a special place in hell for people like that.
8
ck2 1 hour ago 0 replies      
Remember, they switched to the Flint River to save a "whopping" $1 million per year.

It is probably costing $1 million per week now in bottled water.

Imagine what the special-needs kids this causes will cost taxpayers for the next several decades.

Even without the lead leaching issues, everyone knows the Flint River was an industrial dumping ground for decades and was very toxic - no amount of filtering would have made it safe for constant consumption.

9
crimsonalucard 1 hour ago 1 reply      
>"[It's a] trivial cost compared to the damage we prevented," Edwards said in an email to ATTN:. "Best investment we could have made into society."

In the world we live in, people only invest in themselves and society may or may not benefit as a consequence. This type of selflessness is rare and unsustainable.

10
dghughes 1 hour ago 1 reply      
I was in Scranton, Pennsylvania in 1999 and was horrified at the water there; it was yellowish orange.

My guess is many former industrial regions in the US will have heavily contaminated water.

11
droithomme 1 hour ago 6 replies      
They claim the professor and his students did 6,000 hours of work testing samples (an especially round number), which they valued at an average of $27.74/hr even though everyone was a volunteer and no one was asking to be paid. This is how they arrived at an estimated labor value of $166,487 for volunteer work that none of the students who did it were paid, or asked to be paid, for.

It is extremely misleading and deceptive here to say that the professor spent $147,000, because he did not spend $147,000 at all. He spent $11,200 + $3,180 + $50 = $14,430, and he received $32,843 + $200 + $500 + $200 = $33,743 in grants and speaking fees, for a surplus of $19,313.

Furthermore, the money being raised is going into an account supervised by the lead professor. It is not being used to pay the people who did the work for their time. It is especially outrageous that he is collecting money for their volunteer work and keeping it for his own use.

Shadowy tech brokers that deliver your data to the NSA zdnet.com
113 points by morisy   ago   20 comments top 5
1
jlangenauer 1 hour ago 0 replies      
What?
2
mtgx 3 hours ago 1 reply      
I wish we pushed for a law that said you can only serve a warrant for a data request to the person whose data you want. I think from a human rights principle it makes the most sense. The only reason the government can take it from third parties that store our data for us is because it's "easier" for them to do that, and because there hasn't been enough pushback against it.

Imagine if the government said "hey, that money in your bank account, we can just automatically take our taxes from it, because we're not really taking it from you, we're just taking it from the bank." Probably not the most accurate analogy, but I think you see where I'm going with this.

Since the ruling that invalidated Safe Harbor, Microsoft has been pushing for laws and agreements between nations that say law enforcement shouldn't be coming to Microsoft (as a cloud service provider) with a warrant for data requests, but to their corporate customers. So for instance, if FastMail uses the Azure cloud service, they're saying that if the government wants access to a user's data, they shouldn't be going to Microsoft but to FastMail with the warrant.

It's a small improvement, but Microsoft and all of the other companies should be pushing to make this work for all of their customers, not just the corporate ones. It's exactly the same principle, but Microsoft just takes the easier way out here, because that still gets them off the hook, and that's really what they care most about. Corporations (even if they are "people") shouldn't have more rights than actual people.

3
awongh 4 hours ago 0 replies      
They reference this article, which was also interesting: http://www.buzzfeed.com/reyhan/the-most-important-tech-compa...
4
peter303 1 hour ago 1 reply      
I've always presumed that ALL of my online activities are visible to the government or other nosy parties. I do stupid things sometimes, but nothing really stupid.
5
purpled_haze 1 hour ago 4 replies      
Two things:

(1) The NSA is there to protect us, in theory.

(2) There will never, ever be guaranteed privacy or security as long as you continue to use other people's equipment and network. Using the internet expecting privacy is like continually saying, "I'm going to have sex with everyone and going to complain about the people that have STD's."

You could try: https://en.wikipedia.org/wiki/Wireless_mesh_network

Or: https://en.wikipedia.org/wiki/Sneakernet

Those are a little safer.

You Need More Lumens meaningness.com
161 points by ivank   ago   93 comments top 23
1
sambe 4 hours ago 5 replies      
A recent Cochrane review found very little high quality evidence in either direction:

http://www.cochrane.org/CD011269/DEPRESSN_light-therapy-prev...

It's not clear to me why light therapy is considered a well-researched treatment.

2
windsurfer 5 hours ago 2 replies      
This review for the linked led strip is interesting:

This is a perfectly usable (and subjectively bright) LED strip, but beware, the specified lumens are a lie. My power meter reports a draw of 33W and another reviewer reports ~38.6W. The maximum efficiency of a 5730 SMD LED is ~110 lumens/watt, so this strip is outputting at most ~3630 lumens. Seller's description of "45-50LM" per LED would lead you to believe this can output 15000 lumens or ~454 lumens/watt. This is far outside the world record 200 lumens/watt bulbs and even outside the theoretical physical limit of 250-370 lm/W.
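The reviewer's arithmetic is easy to reproduce. A quick sketch using the figures quoted in the review (the 110 lm/W efficacy is the reviewer's assumption about the LED package, not a measured value):

```python
# Upper bound on a strip's light output, given its measured electrical
# draw and the best-case efficacy of its LED package.
def max_lumens(watts, lm_per_watt):
    return watts * lm_per_watt

measured_draw_w = 33        # reviewer's power-meter reading
best_case_lm_per_w = 110    # rough ceiling for a 5730 SMD LED

print(max_lumens(measured_draw_w, best_case_lm_per_w))  # 3630

# The seller's claimed 15000 lm at a 33 W draw would imply:
print(15000 // measured_draw_w)  # 454 lm/W -- physically implausible
```

At a 33 W draw, the claimed output would require more than double the best bulbs ever demonstrated, which is the reviewer's point.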

3
jensen123 5 hours ago 2 replies      
I live in Norway, and used to have a big problem with SAD and longer-than-24-hour-cycles. I tried a few of the commercial lamps like he did, and my experience was similar. They helped some, but not enough.

My current solution is to take a 1-hour walk outside every day around noon. Also, I've stopped sitting in front of my computer or TV within 3 hours of bedtime. Instead I usually spend my late evenings reading paper books.

This has solved the longer-than-24-hour-cycles and most of the SAD (it also seems to have completely cleared up my acne). However, on days when it's overcast and kinda dark, I still notice some SAD. On those dark days, the kind of lamp that he built probably would have been nice.

4
buro9 4 hours ago 2 replies      
You could just purchase stage lighting.

Stage lighting fixtures use the halogen metal iodide bulbs that he salivates over at the end, and already solve all of the issues he outlined. They provide their own ballasts, are metal shielded, use a lens that acts as a UV shield, and have built-in cooling.

In fact, the only issues with stage lighting are:

1) The cooling wasn't designed to be silent (it isn't expected to be near someone in a near-silent environment)

2) The lamp casing wasn't designed to be near anything flammable (they get very hot)

3) The lens and casing is designed to throw the beam in a very small angle of spread over a reasonably long distance (they're not designed to point at your face from a few feet)

But given that, it seems reasonable that one could put it farther away and reflect it into the space you want lit.

And if he really wanted to go crazy whilst staying with LEDs, then he could just get a few of these: http://pulsarlight.com/products/chroma-range/chromaflood200/ which are used in architectural lighting and each one produces 10k lumens, and they are safe for indoor and outdoor use, are waterproof, and can be driven from standard mains power.

5
mapt 2 hours ago 2 replies      
It is perfectly within our reach to simulate a bright daylight sky with the sun behind clouds - around 5k-10k lux - it just requires a fairly large wattage of lighting in a number of indirect fixtures. White metal halide, LEDs, and fluorescents all have efficiencies in the 50-100 lumen/watt range, which just means you need to apply around 100 watts per square meter of matte white ceiling you're trying to illuminate. I like the idea of supplementing the white lighting with colored (e.g. red, red-orange, green) LEDs, which can be throttled to achieve a particular color temperature.
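The 100 W/m2 figure follows directly from the definition of lux (lumens per square meter). A quick check, ignoring fixture and reflection losses:

```python
# lux = lumens / m^2, so the electrical power needed per square meter
# of illuminated surface is the target flux density over the efficacy.
def watts_per_m2(target_lux, efficacy_lm_per_w):
    return target_lux / efficacy_lm_per_w

print(watts_per_m2(10_000, 100))  # 100.0 W/m^2 at best-case efficacy
print(watts_per_m2(5_000, 50))    # 100.0 W/m^2 at the dimmer, low-efficacy end
```

Real installations would need more, since indirect lighting loses a sizable fraction of the flux to absorption at the ceiling and walls.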

Ideally your ceiling would be very high for this setup to give space for the light to diffuse from hanging ceiling reflectors, which is unfortunately not the case for most homes. For realistic homes, there is a style of floor lamp termed 'torchiere' that may work, though for best results you're going to want to find one with a large, fully reflective shroud - lights this bright are pretty harsh if they're not diffused well.

Set up something like this on an automatic timer to simulate the sky, and I suspect a lot of our sleep issues would go away fairly quickly.

6
julianpye 5 hours ago 1 reply      
As a teen, on weekends I worked a lot on film sets as assistant to a lighting cameraman. Switching HMI lamps on was really amazing; the mood of the set changed in an instant. It was quite energetic, so I completely get what the author is saying. They were usually positioned outside the windows, allowing for a stable, consistent sunlight simulation across scenes.
7
toomanybeersies 6 hours ago 1 reply      
Interesting that they're in Northern California and get Seasonal Affective Disorder.

I live at a similar latitude (except in the Southern Hemisphere) and have never considered that it would be a thing outside of essentially polar latitudes.

Over the past couple of winters (I moved further south a few years back) I've had some atrocious winters in terms of mental space. Maybe this is something I should look into.

8
hga 57 minutes ago 0 replies      
I address this by sticking this light http://www.amazon.com/gp/product/B002WTCHLC 9 inches from my face instead of the prescribed 12 inches for 10,000 lux, and use it for 40 instead of the normal 30 minutes/day. Inconvenient, but it has made major improvements over the (still useful, but not as much) Philips 6x10 array of blue LEDs I started with at 12 inches, 3*30 minutes a day. Also helps me wake up in the morning ^_^.

That said, this seems to be right to me, but I could well believe there are people who need 30,000 lumens and therefore have to get more creative.

9
atemerev 4 hours ago 1 reply      
Occasionally (3-5 times per winter) I go to the nearest tanning bed room in the nearby gym, choosing the bed with the least UV and the most brightness.

Yeah, I know, tanning beds are imported directly from Hell to give us skin cancer and stuff. But a sunny day in LA or Barcelona summer will give you much more UV than that. And it really helps with the aforementioned SAD.

10
kristiandupont 5 hours ago 3 replies      
I don't see any info about the spectrum produced by the bulbs he recommends. Color temperature is one thing, but that's equivalent to describing a chord with just the root key. I suspect that a spectrum as close to the sun as possible (including UV and IR) is better.
11
upofadown 46 minutes ago 0 replies      
There is some thought that the normal yellowing of the lens as people get older makes them less sensitive to the effect of blue light. If so, then older people need more light to get the same circadian effect as younger people. It would be interesting to know how old the writer is.
12
shin_lao 6 hours ago 4 replies      
I found the best way to feel better during winter is to supplement my diet with vitamin D.
13
jrockway 1 hour ago 0 replies      
Isn't staring into your computer screen all the light you need? Everyone (even Apple now) says that the light screws up your sleep cycle. Wouldn't it help with SAD?
14
zeristor 4 hours ago 0 replies      
I thought that a large part of the positive response was due specifically to light in the blue part of the spectrum. Hence blue LED devices; ironically, blue light should help you if you're feeling blue. I used a light meter app on my iPhone to measure the light outside in London on a sunny winter's noon, and that came in at 25 Mlux; suitable for an operating theatre, apparently.
15
dalys 4 hours ago 0 replies      
Wouldn't it be better to set f.lux to a latitude of 0 degrees and a longitude of 0 degrees?

I live at 59N 18E so the opposite hemisphere would mean I currently would have daylight between 02:29 and 19:31. While 0, 0 gives you roughly 06:00 - 18:00.

16
dolguldur 2 hours ago 0 replies      
If you don't mind the process of finding something (or waiting for something to come up) on eBay too much, I recommend looking for used photo/film equipment out there. I cheaply sourced two photo/film lamps on eBay. One is a 500W LED lamp and somewhat blueish; the other is a 330W fluorescent, but of very high quality - it's a very convincing daylight tone (not at all greenish). Especially the latter makes for a very nice room light. Both were barely used, yet substantially cheaper than new. The advantage is that you can easily change their angle on one or two axes. I originally bought them for photography and then started liking them as regular room lights, only to later even realize that I basically got myself SAD lamps.
17
applecore 2 hours ago 1 reply      
You seem convinced your brain "slows down" after the equinox due to (the lack of) bright light, but it's probably just the placebo effect. Has anyone ever convincingly measured the effect of bright light therapy on mood and intelligence?
19
insickness 3 hours ago 0 replies      
He doesn't mention an important variable here: time. The less light you have, the more time you need to achieve the desired effect. I leave my lights on all day and they work to reduce my SAD.
20
peter303 3 hours ago 0 replies      
Colorado pot-grower suppliers market LED grow lights that simulate sunlight. Conventional grow lighting is one of the largest energy hogs in Colorado. LED alternatives consume a sixth of the energy.
21
lelf 3 hours ago 3 replies      
> There's clinical evidence that ramping up the brightness gradually before you wake up (dawn simulation) is highly effective.

Can someone recommend a ready-made solution?

22
keithpeter 6 hours ago 3 replies      
I live at 52.5 degrees North. Cosine to the power 4 being what it is, my neighbours and I could be experiencing significantly lower levels of daylight intensity in all seasons than people who live at lower latitudes. In the summer, we will see similar or slightly more daylight integrated over the duration of the day because of the longer days.
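Taking the commenter's cos^4 rule at face value (it is a rule of thumb, not an exact law), and using the fact that at an equinox the noon solar zenith angle equals the latitude:

```python
import math

# Relative noon daylight intensity under the cos^4 rule of thumb.
def relative_intensity(latitude_deg):
    return math.cos(math.radians(latitude_deg)) ** 4

print(round(relative_intensity(52.5), 3))  # ~0.137 at 52.5 degrees North
print(round(relative_intensity(37.0), 3))  # ~0.407 at a lower latitude
```

On that heuristic, 52.5 degrees North would get roughly a third of the noon intensity of a site at 37 degrees, which is the point the commenter is making.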

The OA does not specify the latitude at which he (assuming the OA is the site owner's work) is living.

23
coderdude 5 hours ago 0 replies      
Feeling bad because it's winter? Yee-ikes. Plant problems. I can't believe it's actually called SAD. Luckily summer will be here again and we can all be STOKED.
The benefits of static typing without static typing in Python pawelmhm.github.io
26 points by dante9999   ago   38 comments top 11
1
hibikir 1 hour ago 0 replies      
Optional typing like the one we see here is not uncommon in other dynamic languages: For instance, Clojure has core.typed and prismatic schema, which approach the problem in ways related to what the article shows.

However, while optional typing gives you some benefits over purely dynamic typing, the fact that it's all bolted-on causes a variety of problems. First, there's the fact that you'll have code with types interact with code without them. This eventually causes more trouble than it solves.

IMO, the biggest issue though is that most of these systems add optional typing comparable to very simple type systems, like Java's. But those type systems are not really that powerful! The real power of static typing doesn't come from being able to make sure we don't mix strings and integers, but from doing type checking for much more complex abstractions - type systems like Scala's or Haskell's. Creating an optional typing linter that looks at that high a level, and doesn't cause much pain, is not something I've ever seen. Type inference with generics, existential types, higher-kinded types, algebraic types. That's where the real value is, and where modern typed languages are.

Aiming optional typing at where typed languages were 20 years ago is not going to help bridge the gap. If anything, I think it makes it wider, because then fans of dynamic languages think they understand the position of proponents of type systems when, in fact, they are looking at a strawman from the past.

2
lispm 14 minutes ago 0 replies      
The SBCL Common Lisp compiler detects some of those type errors at compile time - especially for declared types:

  ; (> SECRET GUESS)
  ;
  ; caught WARNING:
  ;   Derived type of GUESS is
  ;     (VALUES STRING &OPTIONAL),
  ;   conflicting with its asserted type
  ;     REAL.
  ;   See also:
  ;     The SBCL Manual, Node "Handling of Types"
  ;
  ; compilation unit finished
  ;   caught 1 WARNING condition
  ;   printed 8 notes
Common Lisp allows optional type declarations. SBCL uses those as compile-time type assertions (a feature it inherited from CMUCL).

3
incepted 1 hour ago 2 replies      
The main advantage of static typing to me is that it enables automatic refactorings.

Without types, it's impossible for tools to refactor your code safely without the supervision of a human. This leads to developers being afraid of refactoring and, ultimately, code bases rot and become huge piles of spaghetti that nobody wants to touch.

With a statically typed language, I'm never afraid to refactor whenever I see an opportunity to do so.

4
TazeTSchnitzel 48 minutes ago 1 reply      
Having type annotations that are ignored at runtime would cause problems when you interact with code without type annotations, no?

PHP has an (unfortunately quite limited) set of type annotations, but the interpreter actually enforces them.

5
mcguire 35 minutes ago 0 replies      
What does mypy do if you make a call from code with type annotations into code without? Or vice-versa?

Has anyone ever gotten paid to add annotations to code that works?

Personally, I view type systems as like a safety line when doing work on a roof, and optional typing as having a line that might or might not be tied off.

6
hardwaresofton 27 minutes ago 1 reply      
Has anyone ever tried to implement optional typing in Python with just decorators?

It seems like a decorator like @Signature(...input types, output type) would solve this problem with limited language changes, and would work in Python 2/3?
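A minimal sketch of such a @Signature-style runtime check might look like the following. Note this checks types at call time, not statically, and the signature decorator here is hypothetical, not an existing library:

```python
import functools

def signature(*arg_types, returns=None):
    """Check positional argument and return types when the function is called."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            for value, expected in zip(args, arg_types):
                if not isinstance(value, expected):
                    raise TypeError(
                        f"expected {expected.__name__}, "
                        f"got {type(value).__name__}")
            result = fn(*args)
            if returns is not None and not isinstance(result, returns):
                raise TypeError(f"expected return type {returns.__name__}")
            return result
        return wrapper
    return decorate

@signature(int, int, returns=int)
def add(a, b):
    return a + b

print(add(1, 2))   # 3
# add(1, "x")      # would raise TypeError at call time
```

As written this is Python 3 only (f-strings, a keyword-only argument); a 2/3-compatible version would need minor changes, but the idea carries over.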

7
tcopeland 2 hours ago 2 replies      
Here's a longer article (http://codon.com/consider-static-typing) with more background / history about static typing and Ruby.

This topic has been knocked around in Ruby-land for a while; I remember seeing Michael Edgar's LASER (https://github.com/michaeledgar/laser) static analysis tool including some work around optional type annotations, but seems like development there has stopped.

8
such_a_casual 6 minutes ago 0 replies      
Aren't type annotations in Python just documentation that's designed to look like it's not? It seems like it would make more sense to just create a documentation standard than to develop a brand-new syntax for documentation. Another idea that makes more sense to me is to use @decorators. I really don't understand why this syntactic hack is supposed to be a good idea.
9
hahainternet 1 hour ago 1 reply      
> Probably this will still not be enough for Python enemies though.

Because it doesn't appear to actually do anything. It's a joke to call this 'static typing'.

10
qwer 1 hour ago 1 reply      
Since I unit-test the heck out of my code, this doesn't really do much for me. Unit-tests test actual values (which is where the interesting bugs come from IMO) and give me more powerful refactoring capabilities than an IDE.

The real benefit I'd be looking for is the chance to give the compiler hints to speed up execution times.

11
vinceguidry 2 hours ago 2 replies      
Ugh. I have hardly any type problems in my Ruby code. I've gotten very good at recognizing implicit state and capturing it in a properly instantiated object with a well-named class. I daresay that if you don't have this skill, a type system isn't going to help you much and you're going to get nasty bugs anyway.

The problem in Ruby is nils, and you'd have the same problem with the same solution in a static language; creation of a duck type. You can't get away from duck types, whether it's a maybe type or whether you perform nil-checking at the earliest possible opportunity. You learn with time and experience how to deal with inconsistent data. Ruby gives me the flexibility to do it without a lot of boilerplate.

Possible BGP hijack bgpstream.com
75 points by v4n4d1s   ago   17 comments top 5
1
aroch 4 hours ago 4 replies      
OMZ Global (AS34329) is an industrial process company (e.g. steel, manufacturing, shipbuilding). I'm going to assume someone who shouldn't have had access to BGP is trying to use it to block Google DNS or similar inside the corporation.

That or some pretty hilariously heavy-handed state-sponsored hijacking.

2
nmjohn 2 hours ago 0 replies      
Heads up: this happened a few days ago; it's not currently hijacked.

Handy tool though; bookmarked it. Using the event graph to display route changes as detected over time is a great visualization. It would be really cool if there were the same event graph covering the entire internet (though I suspect that without some cleverness in both design and implementation, the quantity of data would be prohibitively large for building a useful visualization).

3
ra1n85 2 hours ago 0 replies      
Am I reading this right that the leak lasted 2 hours?

These are often the result of mistakes. Even if OMZ were a tier 1 provider in RU, the impact would still be limited - I can't see how this could be intentional.

4
zdw 1 hour ago 0 replies      
The BGPlay visualization on that page appears to be open sourced: https://github.com/MaxCam/BGPlay
5
joantune 3 hours ago 1 reply      
* Edited: Disregard this comment *

Ok, so if OMZ is Russian, do know this:

Russia is currently 'blocking' several websites, i.e. blocking at a DNS level, so this might be a half-witted attempt at keeping up the censorship.

And as @swiley noted: some people only use the easier to remember 8.8.8.8 (I do that for instance)

A new way of rendering particles simppa.fi
45 points by plurby   ago   6 comments top 6
1
emcq 1 minute ago 0 replies      
I really like this. It would be nice to see some benchmarks versus a quad-based solution where the quad is tessellated in a geometry shader, with a fragment shader drawing the particle or using a texture. My guess would be that the bottleneck is PCIe, but my knowledge of GPU performance is a little outdated.
2
zeta0134 17 minutes ago 0 replies      
Seems like a very simple expansion on this technique would be to keep a rolling framebuffer, and blend the last particle draw with the current one. This would remove the flickering, display the fake alpha properly for still screenshots, and work for variable framerate.

Assuming your GPU will let you hold onto a previous frame's draw result (of just the particles) and blend it relatively cheaply with the new one, it wouldn't cost very much to implement. One full framebuffer blend should be much, much cheaper than blending the individual dots.

3
Danieru 19 minutes ago 0 replies      
If this sounds interesting to you I suggest also checking out TXAA. It exploits a similar concept to get high quality cheap AA.
4
BatFastard 33 minutes ago 0 replies      
I have been looking for a better particle system. Thanks! One strange advantage of putting new features off: better ways of doing them suddenly appear.
5
lux 53 minutes ago 0 replies      
This is really cool. I'm curious how this will look on a Gear VR, since those run at 60fps and require every optimization they can get to maintain that on a cell phone.
6
deepnet 46 minutes ago 0 replies      
This brilliantly insightful heuristic will work well for showing off point clouds in WebGL.
War and Peace and WebGL wdobbie.com
38 points by wjd   ago   16 comments top 6
1
IvanK_net 44 minutes ago 0 replies      
This discussion reminds me of a time 4 years ago, when I showed my WebGL games to my friends and instead of getting interesting feedback, 90% of the comments were about my games not working on their devices. Sad to see the same thing in 2016 :(
2
exDM69 3 hours ago 4 replies      
I see all white pages with no text on them. "Show grid" shows yellow and green rectangles. Linux, Firefox 43, Nvidia proprietary drivers.
3
timon999 39 minutes ago 0 replies      
I'm getting some weird artifacts: http://i.imgur.com/U3sSkW3.png

Ubuntu Gnome, AMD open source driver

4
fla 2 hours ago 1 reply      
I'm curious about the choice of bmp for http://wdobbie.com/warandpeace/glyphs.bmp

Why not a PNG?

Edit: So after a quick test:

* glyphs.bmp 53.5 MB

* glyphs.png 7.6 MB

I guess it could be shrunk even more.
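The size gap is roughly what you'd expect: BMP stores raw pixels while PNG runs DEFLATE over them, and a glyph atlas is mostly background. A rough illustration using zlib (the same compression family PNG uses; the simulated image below is made up, not the actual atlas):

```python
import zlib

# Simulate an 8-bit 1024x1024 "glyph atlas" that is mostly background,
# with a vertical band of foreground pixels.
width = height = 1024
row = bytes(255 if 100 <= x < 160 else 0 for x in range(width))
raw = row * height

compressed = zlib.compress(raw, 9)
print(len(raw))                          # 1048576 bytes uncompressed, like BMP
print(len(compressed) < len(raw) // 10)  # True: DEFLATE crushes repetitive data
```

Real glyph data compresses less dramatically than this toy image, but the 53.5 MB vs 7.6 MB numbers above are in the plausible range for mostly-empty pixel data.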

5
arethuza 2 hours ago 3 replies      
Works fine, and is rather impressive, on Windows 10 - both in FF and Chrome.

Doesn't work on iPad - just get blank pages.

6
moron4hire 2 hours ago 0 replies      
It's almost completely unreadable on my Android phone. The character shapes are mostly there, but the triangles overlap in weird ways, like the vertex order might be wrong or something.

https://www.dropbox.com/s/cvutftylbj77d8r/Screenshot_2016-01...

https://www.dropbox.com/s/u3qn9h5pgw2p671/Screenshot_2016-01...

https://www.dropbox.com/s/y4h93ycueykrjai/Screenshot_2016-01...

When doctors, psychologists, and drug makers can't rely on each other's research reason.com
32 points by nkurz   ago   12 comments top 3
1
Alex3917 2 hours ago 2 replies      
Why are people so obsessed with publication bias and statistical power? Those are just two of the dozens of reasons why published research is unreliable, and it's not even clear that they're the most important ones.

It seems like the skeptic community just randomly glommed onto those issues in the 90s or something and haven't updated their worldview since.

2
mhkool 1 hour ago 3 replies      
In double-blind tests, one deliberately changes only one variable and compares/observes two groups where individuals do not know whether they are taking a new drug or a placebo. If the drug has a positive effect, the double-blind test is an accepted method of proof, but unfortunately this limits our options severely. If cancer can be cured by changing _two_ variables, we will never find a cure. NEVER! Remember that we have already been doing research for 50+ years and have spent trillions, so I tend to think that Einstein was right: it is insane to repeat the same experiments and expect a different outcome.

Let's jump to Alzheimer's, a feared disease that also has a lot of research and not-so-good drugs. Dr Bredesen has made a protocol with 35 variables to cure Alzheimer's. And in his first test, he reversed Alzheimer's in 9 out of 10 patients. The scientific purists say that there is no double-blind study, so no proof. I am convinced that Dr Bredesen is on the right path, not only because of the results but also because of his reasoning. The treatment has no drug, but a health optimization in all possible ways. And then the body heals itself in 4 months. Please do not comment with "that cannot be true" unless you can prove that it cannot be true (I do not believe that you can provide proof).

So the state of current research methods is weak. Medical research only focuses on one variable, one drug, that will cure a disease. The results of this way of doing medical research are VERY disappointing. Trillions have been spent and no drugs that cure cancer, AIDS or Alzheimer's have been found. Dr Bredesen has chosen a different path and I support him.

3
jessaustin 1 hour ago 0 replies      
"HARKing" is a great coinage I had not seen before.
HAARP, Faroe Islands, Valencia City: Google Earth's Classified Locations news.com.au
8 points by eplanit   ago   1 comment top
1
johngalt 10 minutes ago 0 replies      
AKA, list of locations to send an intelligence agent with binoculars. Seems like there would be a Streisand effect with this sort of thing.
Exploring Swift Dictionary's Implementation ankit.im
11 points by ingve   ago   1 comment top
1
mcguire 14 minutes ago 0 replies      
An interesting introduction to hash maps, with a fairly simple implementation.

For more fun, check out Python's implementation, which uses the remaining bits of the hash instead of linear probing; Robin Hood hashing, which rearranges entries to keep probing chains short; and the security issues caused by easily determined hashes.
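As a reference point for the probing schemes mentioned above, a toy open-addressing table with linear probing can be sketched in a few lines of Python (simplified: fixed capacity, no deletion or resizing, so it breaks when full):

```python
class ProbeTable:
    """Toy hash table using open addressing with linear probing."""
    def __init__(self, capacity=8):
        self.slots = [None] * capacity  # each slot: None or (key, value)

    def _index(self, key):
        i = hash(key) % len(self.slots)
        # Walk forward until we find the key or an empty slot.
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._index(key)] = (key, value)

    def get(self, key):
        slot = self.slots[self._index(key)]
        return slot[1] if slot is not None else None

t = ProbeTable()
t.put(0, "a")
t.put(8, "b")   # hash(8) collides with hash(0) mod 8, probes to the next slot
print(t.get(0), t.get(8))  # a b
```

Robin Hood hashing differs from this in that on insertion it swaps an entry into a slot whenever the incumbent is closer to its home slot, which keeps worst-case probe chains short.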

Postgres Query Plan Visualization tatiyants.com
147 points by areski   ago   11 comments top 10
1
morenoh149 1 minute ago 0 replies      
For those who couldn't get the anonymous cvs access working: I put the sample db on github https://github.com/morenoh149/postgresDBSamples/tree/master/...
2
valgog 5 hours ago 0 replies      
Wow, wonderful work!

As additional information about existing execution plan visualisation tools:

Depesz wrote the classic PostgreSQL execution plan visualiser years ago.

http://explain.depesz.com/

Of course it is not as pretty as the one from Tatiyants, but I use it now and then and it has become a standard explain visualisation tool for many PostgreSQL users.

The table format from http://explain.depesz.com/ is very useful, and you can understand a lot of details about your execution plan without needing to visualise it as a graph.

Also the default execution plan visualiser in pgAdmin looks really cool.

3
pmontra 6 hours ago 0 replies      
That page yields a 504 Gateway Time-out right now. Cached at http://webcache.googleusercontent.com/search?q=cache:jH9Wl8R...

The demo at http://tatiyants.com/pev/#/plans works.

Unfortunately it has some glitches.

Hints for using it:

* In psql use \o plan.txt to redirect the output of explain to a file. It will be a long output because you must use EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS, FORMAT JSON) and copying it from the terminal won't be fun.

* Remove the first two lines (header) and the last two (footer) leaving only the json data. Remove all the + characters at the end of every line. Suggestion to the author: the tool should handle that.
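That cleanup can be scripted. Here is a hedged sketch using GNU sed and head (the two-line header, two-line footer, and trailing `+` continuation markers match the description above, but the exact counts may vary with your psql version and settings):

```shell
# Fabricate a small sample of what psql writes with \o plan.txt:
# a 2-line header, '+' continuation markers, and a '(1 row)' footer.
printf '%s\n' 'QUERY PLAN' '------------------' '[                +' '  { "Plan": 1 }  +' ']' '(1 row)' '' > plan.txt

# Drop the 2-line header and 2-line footer, strip trailing '+' markers.
sed '1,2d' plan.txt | head -n -2 | sed 's/ *+ *$//' > plan.json
cat plan.json
```

The result in plan.json should be bare JSON, ready to paste into pev.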

That said, it works. It really is much clearer than the output of explain in the terminal and as a result I'm googling how to speed up sort now. Thanks.

4
ris 1 hour ago 0 replies      
This looks very nice.

Something I've wanted in a postgres query plan visualizer is a "timeline" view of the various nodes. Seeing as for each node we get a "start time" and "duration" it seems like it should be possible to draw something a little like a flame graph to see at what points the nodes are doing their work.

5
banku_brougham 23 minutes ago 0 replies      
Thank you!!!! This will make my life easier and more fun.
6
janfoeh 6 hours ago 1 reply      
It seems to be getting hammered in the moment, so I can't give it a spin right now, but just from your explanation this looks fantastic.

For some reason I just cannot parse Postgres' query planner output, so this might help me understand EXPLAINs for the first time. Thanks for sharing!

7
orf 2 hours ago 0 replies      
This looks great, much better than the pgadmin output! A good feature would be a CLI tool we could pipe output to, e.g. `psql some_long_query | pev`, which would bring back a URL we could open.
8
barrkel 55 minutes ago 0 replies      
The output from postgres explain is already in a very readable format when you compare it with the mishmash from mysql. A nice tree view all adding up.
9
dveeden2 5 hours ago 0 replies      
MySQL Workbench recently also got a visual explain option. This is very useful; I don't know why everyone keeps struggling with a text format (especially for large explain plans).

https://www.mysql.com/common/images/products/mysql_wb_visual...

10
andrewvc 3 hours ago 0 replies      
Awesome! I used this the other day and it is great. The site seems to be down, I recommend bookmarking it and coming back!
State Management Problem erlang.org
22 points by joeyespo   ago   1 comment top
1
lectrick 1 hour ago 0 replies      
This is a quite beautiful post about engineering in general, actually. You try to solve a practical problem as best as you can, and you get sort of an unintended-beneficial-consequences effect and end up creating principles that have a more universally-applicable nature.

I personally hope Elixir helps thrust Erlang into ever more success. (But not so much success that it becomes a victim of it...)

Consider support for French sovereign operating system github.com
10 points by andrelaszlo   ago   2 comments top 2
1
WiseWeasel 0 minutes ago 0 replies      
This relates to a brief push for the development of such an OS back in 2014; it didn't go anywhere, and I'm not aware of any recent activity on this front.

http://www.rudebaguette.com/2014/06/06/digital-sovereignty-c...

It's a great use of Super DuPont imagery, however.

https://en.m.wikipedia.org/wiki/Superdupont

2
octatoan 10 minutes ago 0 replies      
> --cherchez-stackoverflow
Rss-puppy: A watchdog tool for monitoring RSS feeds github.com
30 points by ingve   ago   4 comments top 2
1
derFunk 1 hour ago 1 reply      
Looks interesting. I'm currently doing automated RSS feed monitoring with IFTTT and its Maker channel. The Maker channel calls my own PHP script when an entry has been added to a third-party feed. It's a feed announcing new software releases, which I then parse, download, and automatically install on a farm of 8 servers. RSS-puppy could remove the dependency on IFTTT in this case.
2
meunier 3 hours ago 1 reply      
This looks like it would integrate really well with newsbeuter. Having to refresh my full set of feeds is always what's kept me using a cloud-provided RSS reader.
Too Good to Be True: How More Evidence Can Decrease Confidence marginalrevolution.com
77 points by mhb   ago   25 comments top 9
1
muxme 35 minutes ago 0 replies      
I'm running into this problem on my startup website (http://muxme.com). I've found that the more evidence I provide that the site is real, the more people start to question it. For example, I post pictures of the receipt of purchase, open-source the raffle code script, have a live drawing, post the winners' usernames (which reveal quite a bit about them if you google them; I encourage them to change their usernames), post USPS tracking numbers, and post my phone number and address, and everyone still thinks the site is a scam. Of every company / website I've worked on, I've never had people so skeptical of giving a name, email address and phone number. I've made websites where I charged money and had better growth!

This is some real feedback I've received:

"It sounds like a scam to me. If you want to try to scam people by getting them to click on your fake website, you should have enough common sense not to use the website as your username. Better luck next time!!!!"

"Scamming people on Christmas Eve using someone's else's platform. Tacky."

2
Homunculiheaded 8 hours ago 4 replies      
There is a similarly interesting result in E.T. Jaynes' "Probability Theory: The Logic of Science" (chapter 5), where Jaynes demonstrates that increased evidence can actually decrease your belief in a hypothesis.

Jaynes gives an example of an experiment in which a psychic predicts cards, getting n out of m correct. In classical hypothesis testing H0 would be "the psychic got lucky" and H1 is "the psychic has mystic powers". Jaynes first attempts to use Bayesian reasoning to work backward to determine your prior belief in psychics. That is, ask how much evidence it would take to convince you; compare that to the raw likelihood, and what you have is your prior beliefs quantified.

He then points out that he personally would never believe the subject was psychic. This is because there are not just H0 and H1, but also H2 "the psychic is deceiving the experimenters", H3 "the experimenters are making an error", H4 "the experimenters are fudging the results", H5 ... Each of these has its own prior.

If your prior for believing in psychics is low enough and your prior for "the experimenters are fraud" is high enough, the more extreme the evidence the more you will be convinced that the experimenters are disreputable con artists, and subsequently the less you will believe the subject is psychic.

This is actually Jaynes' solution to a huge problem with Bayesian reasoning as a model of human reasoning: "If more data should override a bad prior, then why in the 'age of information' does nobody agree on anything?" This example shows, according to Jaynes, that while we can certainly have irrational priors, we can still explain human reason in Bayesian terms and still get a situation where two people faced with plentiful information arrive at contradictory conclusions.
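Jaynes' point can be sketched numerically. The priors and per-trial accuracies below are my own illustrative assumptions, not his figures: as a run of correct guesses grows, the posterior for "psychic" first rises, then collapses, because "fraud" explains the data even better.

```python
# Three competing hypotheses (toy numbers, chosen for illustration only)
priors = {"luck": 0.99, "psychic": 1e-6, "fraud": 0.01}
p_correct = {"luck": 0.2, "psychic": 0.9, "fraud": 0.99}  # P(hit | hypothesis)

def posterior(hypothesis, n_hits):
    """Posterior probability after n consecutive correct card guesses."""
    weighted = {h: priors[h] * p_correct[h] ** n_hits for h in priors}
    return weighted[hypothesis] / sum(weighted.values())

# A short streak makes "psychic" more credible than its prior; a long
# streak makes it LESS credible, because the fraud hypothesis absorbs
# the evidence.
short_run = posterior("psychic", 5)
long_run = posterior("psychic", 50)
```

The exact crossover depends entirely on the priors; the qualitative shape (rise, then fall below the prior) is the phenomenon Jaynes describes.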

3
bsbechtel 6 hours ago 3 replies      
>>What Gunn et al. show is that the accumulation of consistent (non-noisy) evidence can reverse one's confidence surprisingly quickly.

I know this comment won't be accepted very well here on HN, but I think this is one of the reasons many people don't believe in climate change. Science, and the subsequent reporting on it, has been saying consistently for the past 20 years that climate change is real, and it's worse than we thought. Every few months there is a new, widely reported study saying that X new evidence proves things will be much worse than we thought. If there were a study every once in a while that said study X about climate change was wrong, and we didn't consider factor Y, but climate change still seems to be headed in a dangerous direction, then I think people might be more willing to accept the thesis.

Of course, the paper doesn't suggest that we are wrong when we suspect systemic error, and skeptics of climate change may very well be wrong, but maybe this will help others understand their skepticism.

4
Lazare 3 hours ago 0 replies      
Another way of thinking about it: if your performance metrics, error logs, or unit tests are always 100% green/perfect, with no failures or spikes or anything, then it's probably a pretty good sign that your metrics/logs/tests are broken.
5
bitwize 8 hours ago 2 replies      
One of the ways in which I can spot a scam is to look it up online. If all of the reviews of the product are glowingly positive (and use similar language), then I start looking for a pyramid scheme or something driving sales.
6
anotherhacker 5 hours ago 0 replies      
Mo' Data, Mo' Problems.

The New Coke flop is a perfect example of this.

New Coke was the result of the largest marketing / consumer research project ever. The conclusion of this research was to change Coke's formula. Coca-Cola Chairman Roberto Goizueta claimed that the decision was "one of the easiest we have ever made." Coca-Cola thought this way for two reasons:

#1 Research was done with a sample of over 200,000 customers.

#2 Coca-Cola's researchers triangulated the validity of their data with a mixed-method approach. They used focus groups, various surveys, and individual interviews.

All this data and research did the opposite of what it was supposed to. The large sample size gave them lots of useless data. Data triangulation, which was supposed to safeguard them, did the opposite: it convinced them that their useless data were useful.

Why does this happen?

In an unnatural system, variance is unbounded. As your data set grows, the unbounded variance grows nonlinearly compared to the valid data. As variance increases, deviations grow larger, and happen more frequently. Spurious relationships grow much faster than authentic ones. The noise becomes the signal.

7
peter303 3 hours ago 1 reply      
Maybe later in this century we'll have the alternative of objective Trial by A.I. instead of by peers. Lawyers won't like this because they win by manipulating humans.
8
gtonic 7 hours ago 0 replies      
Also see the publications of Dr. Gerd Gigerenzer about this topic, e.g. in his book 'Gut Feelings: The Intelligence of the Unconscious'.
9
SixSigma 8 hours ago 0 replies      
It's called equivocal data. It is well known in management circles; it's the part of management that requires experience and judgement rather than facts. Computers can replace decisions based on unequivocal data.
Joint Pain, from the Gut (2015) theatlantic.com
22 points by bootload   ago   4 comments top 3
1
junto 2 hours ago 0 replies      
Helicobacter pylori is blamed for increasing the risk of stomach cancer. My gastroenterologist treated me to get rid of it for this reason.

That being said, since 50% of the world's population has this in their gut, I'm starting to get the feeling that modern medicine doesn't have the entire picture in relation to the microbiome, and is making drastic decisions based on half the story.

It seems increasingly likely that the delicate balance of bacteria inside the gut, or more specifically its imbalance, is closely linked to a variety of autoimmune diseases.

The long-term consequences of antibiotics, highly processed foods and sugars, alongside the corn phenomenon, are not understood and could well be changing us for the worse.

2
oniMaker 2 hours ago 1 reply      
"Finnish researchers found that a vegan diet changed the gut microbiome, and that this change was linked to an improvement in arthritis symptoms."

I had a suspicion when I started reading this article that the simplest solution would be to cut meat out of the diet.

It's astounding how many problems are solved medically, environmentally, economically, and of course ethically if we simply stop consuming meat, or at least stop consuming it at such extreme levels!

3
jmadsen 3 hours ago 0 replies      
There was a fascinating RadioLab episode on a similar topic not too long ago.

There are indications, too, that these microbes may affect/control things like depression, bipolar, etc.

We may see some amazing medical breakthroughs in this area in the coming years

RiveScript A Simple Scripting Language for Chatbots rivescript.com
11 points by nikolay   ago   1 comment top
1
nikolay 2 hours ago 0 replies      
Here you can find some samples: https://www.rivescript.com/about
How the AI revolution was born in a Vancouver hotel financialpost.com
25 points by jonbaer   ago   7 comments top 4
1
ryporter 1 hour ago 1 reply      
This is an ignorant and hyperbolic characterization of recent developments in neural networks. In no way did most AI researchers think that it was "nuts" to learn using neural nets. Instead, neural networks were an approach that had fallen out of favor, and many considered other approaches more promising. But I don't think that any serious researcher has ever called Geoffrey Hinton "nuts." He was very well respected in his field before this epic hotel meeting.
2
MikeNomad 3 hours ago 2 replies      
Title seems like clickbait to me. The article is an interesting bit about neural networks specifically, not AI in general.

Also, not seeing any "revolution" going on.

3
samfisher83 2 hours ago 0 replies      
Warren Sturgis McCulloch, one of the people who invented neural networks, was a doctor
4
hacker_9 4 hours ago 0 replies      
what 'revolution'?
Opam-ios: an OCaml compiler for iOS via opam, with example server code github.com
70 points by e_d_g_a_r   ago   18 comments top 6
1
loxs 7 hours ago 1 reply      
Impressive! It would be great if we also get an Android port... would be the first sane cross platform mobile framework.
2
edwintorok 3 hours ago 1 reply      
Have you encountered any problems with the App Store review process due to the application being written in OCaml? I thought they don't much like applications that aren't written in Objective-C.
3
akhilcacharya 9 hours ago 2 replies      
Okay, I'll bite - why OCaml specifically? Aren't there other FP languages that cross-compile to iOS? Is there any advantage to using an ML?
4
lindig 9 hours ago 1 reply      
The server code in the example allocates a new buffer inside the server loop at every iteration. Is this intended?
5
e12e 8 hours ago 1 reply      
Nice. This still requires an OS X host? Or is it expected to work under eg: Linux too?
6
melling 6 hours ago 1 reply      
It seems that OCaml has been working on iOS for at least 4 years: https://news.ycombinator.com/item?id=3740173 yet you hardly hear about it.

Anyone care to discuss long term support? How brittle is it with iOS upgrades, for example?

Any Swift examples that I can include in my Swift Weekly page?

http://www.h4labs.com/dev/ios/swift.html?week=0

All Models of Machine Learning Have Flaws (2007) hunch.net
28 points by walterbell   ago   7 comments top 2
1
peter303 3 hours ago 3 replies      
The resurgence of AI is just as hyped as expert systems and logic computers were in the 1980s. I just hope the inevitable AI bubble crash is not as damaging as it was then.
2
joantune 3 hours ago 1 reply      
Ten arrested in Netherlands over Bitcoin money-laundering allegations theguardian.com
30 points by edward   ago   19 comments top 6
1
yason 6 hours ago 4 replies      
It sounds to me like Bitcoin is a lousy medium for money laundering. It's unclear to me whether they laundered money by supposedly buying and selling bitcoins, or by converting shadowy assets into BTC and then back to physical currency, but in both cases the transactions can be tracked back in the blockchain. So you'll not only have to make a good story of where the money came from but also, if and when the authorities get interested, make sure that how the BTC is moved around reflects that story.

But merely converting BTC to cash via ATMs doesn't even sound like laundering to me at all, just conversion. You would still need to explain the cash you've acquired if you intend to buy something with it.

Money laundering itself is not for those in a hurry. To plausibly launder money you need a service business with costs that are rather fixed and customers who can plausibly pay with cash. Then you can have as many "customers" as you like and nobody can really tell how many you actually had; depending on how you set up the scheme you can report a reasonable income over the months and years, pay the taxes, and thus turn the money into legal earnings.

2
patrickaljord 2 hours ago 0 replies      
They were arrested for drug trafficking, not bitcoin money-laundering.

> The alarm had been raised by banks which had seen large sums of money being deposited before being immediately withdrawn at cashpoints.

Not the smartest individuals here. How did they think they wouldn't get caught? Seems like another anti-bitcoin piece, which ends with the Mt. Gox failure and this gem: "Bitcoin's reputation was also damaged when US authorities seized funds as part of an investigation into the online black market Silk Road." So does that mean the dollar's and euro's reputations are damaged every day when funds are seized as part of 99.9% of criminal investigations?

3
ryanlol 5 hours ago 0 replies      
Are there any articles that clarify what these guys were actually doing to make the money?

The fact that they don't mention it makes it sound like they might have been doing something more interesting than just selling drugs, as usually they tend to be fairly public about those cases.

Edit: Guess it was just some drug dealers then.

>All suspects fell into one of two categories: criminal traders on the Dark Web, and bitcoin-cashers who are paid by these traders to exchange bitcoins for cash

4
branchless 3 hours ago 0 replies      
Should have done it thru London property, then the establishment get a cut.
5
at-fates-hands 1 hour ago 0 replies      
Interesting that everybody is pointing out that BTC is traced through the public blockchain, but what about BTC tumblers/mixers that are designed specifically to anonymize BTC transactions?

I remember one Reddit thread where they chased a huge amount of stolen BTC and ended up losing track of it in one of these:

https://www.reddit.com/r/SheepMarketplace/comments/1rvlft/i_...

As far as I know the thief got away with it.

6
DonHopkins 3 hours ago 1 reply      
"Bitcoin's reputation was also damaged when US authorities seized funds as part of an investigation into the online black market Silk Road."

If by "seized" they mean "stole". Notice how they spin the fact that US authorities were charged and prosecuted for STEALING bitcoins while investigating Silk Road into the fiction that it damaged the reputation of Bitcoin, but not US authorities.

MRI Scans of a person's brain github.com
130 points by shagunsodhani   ago   96 comments top 21
1
rogeryu 4 hours ago 2 replies      
Last year an MRI scan was made of my brain. This was part of doctoral research at the university. All I got back was a lousy screenshot of my brain. ;-)

No but seriously - it was a big shock to see my brain in a picture. It is a really weird experience, similar to when I saw myself for the first time on film. Maybe this is a normal experience for the current generation, but I hadn't seen myself on film before age 15. You have an image of how you move, what you look like, but then you see this on film, and it's a total shocker. All these small movements you make, typical for you, and everybody around you knows them, except for you. So everybody else sees nothing strange when viewing that movie, except you.

I've seen many MRI scans in films and tv series, although I can't remember one since then. From these I have a general picture of what a brain looks like. I've seen plastic 3D brain models. And then I see my brain and it's so different. It's clearly me, no doubt, but still... Totally weird!!!

2
trickyager 12 hours ago 5 replies      
Just as a general note, these images contain personally identifying information about the patient. This includes not just the patient's name, but also his age, address, phone number, and an extremely limited health history.
3
aantix 11 hours ago 3 replies      
Back in December of 2007 a nodule was detected on my lung. Not knowing what to think of it, I posted the images on my blog and recruited other med students to comment on the images. Here's the Way Back Machine link to the blog post.

https://web.archive.org/web/20081205032710/http://www.runfat...

4
anotheryou 7 hours ago 3 replies      
Anything cool one can do with this? I have my MRI + DTI and wonder what to do with it.

Mine looks like any brain to me (no tumors, yay!).

From the DTI I was able to calculate the white matter trajectory with slicer: http://screencast.com/t/yDYFJdL7D

(I was a little let down that I couldn't follow the optic nerves, but as they cross, there is no clear direction of diffusion that could be imaged, I guess...)

5
schappim 11 hours ago 1 reply      
I've submitted a pull request: https://github.com/dcunited001/mri-scans/pulls
6
rdtsc 12 hours ago 2 replies      
Was wondering, in general: is the patient always the copyright owner of this kind of data? Can they relicense it, publish it, do anything they want with it?

It seems like that would be the case, but there has been talk about patenting parts of the DNA in the past, and legally sometimes things work in unexpected ways.

7
cafebeen 2 hours ago 0 replies      
For those who are interested, the human connectome project is in the process of acquiring and publicly releasing a database of 1000+ MRIs collected from normal subjects:

http://www.humanconnectome.org/data/

8
meshko 12 hours ago 0 replies      
Does he accept pull requests?
9
sergers 12 hours ago 1 reply      
If you are interested in viewing patient scans, with the PHI anonymized but the rest of the metadata intact, check out:

https://www.mypacs.net/mpv4/hss/casemanager

(30,380 cases. 187,945 images.) It's a teaching/sharing site for radiologists where they can send the data from their PACS. This is the free public site, which includes a basic web-based DICOM viewer to zoom/pan/window-level, etc.

10
simonster 12 hours ago 0 replies      
Here's the "Colin 27" brain atlas, derived from 27 scans of Colin J. Holmes's brain: http://www.bic.mni.mcgill.ca/ServicesAtlases/Colin27

Here's a lot of data from Russ Poldrack, who scanned his brain and collected behavioral and metabolic data from himself very regularly over the course of a year: http://myconnectome.org/wp/data-sharing/

11
th0ma5 10 hours ago 1 reply      
Can I just get an MRI or CT scan without a medical reason and at a reasonable cost? Perhaps that is an absurd question in the United States.
12
ciroduran 5 hours ago 0 replies      
The McGill university also offers these MRI simulations, if you need brain scans for your visualisation/processing needs. http://brainweb.bic.mni.mcgill.ca/brainweb/

Very useful when I did my undergraduate thesis.

13
aculver 12 hours ago 1 reply      
Hey David, I didn't expect to see your brain on the front page of HN. Can you confirm for us that everything is OK? :-)
14
amelius 4 hours ago 0 replies      
This is nice. It would be even nicer if this was complemented by medical records, and genotyping data (e.g. from 23andme).
15
Razengan 9 hours ago 0 replies      
Not quite the same, of course, but this reminds me of the game Soma, where [SPOILERS!] the protagonist has his brain fully mapped and "uploaded" as part of a treatment, and his scans are used as a template for A.I. research in the future, leading him to be "reborn" a century later in a different body.
16
VeilEm 13 hours ago 3 replies      
If we get better at this and are able to create detailed enough scans of a brain maybe we can recreate that person some day, including memories and everything.
17
martin1b 12 hours ago 4 replies      
Anyone recommend a viewer?
18
kendallpark 12 hours ago 2 replies      
This couldn't come at a more opportune time. I'm studying neuro this block and need me some brains to look at.
19
akerro 5 hours ago 0 replies      
I'm starting to be afraid that one day my brain, memories will be pushed on git :<
20
zump 12 hours ago 0 replies      
Now to apply compressive sensing!
21
devereaux 13 hours ago 3 replies      
With a usual CT scan, you can do a volume rendering. Play around with the opacity to show the skin - this will give you a 3D image of your face.

Example on http://1.bp.blogspot.com/-LuLV6F-Fp0o/TbcY7BUvpjI/AAAAAAAAAM...

With a usual MRI, unfortunately slices are too far apart to be able to stitch them together like this.

Hand powered drilling tools and machines (2010) lowtechmagazine.com
34 points by bane   ago   13 comments top 7
1
arethuza 3 hours ago 0 replies      
~20 years ago I knew a guy who had done programming with a hand drill.

He was maintaining an old mainframe that ran a steel plant and had to modify the boot sequence. Originally the device booted from paper tape but over the years/decades the paper tape had been replaced by a stout piece of leather. So to modify the boot code he had to resort to a hand drill!

2
jacquesm 4 hours ago 1 reply      
One very good reason for using a hand drilling tool (one without gearing) is when you drill in hardwoods and you want the hole wall to be wood rather than charcoal. This is a very important thing when drilling up the holes for pins in old pianos, if the hole walls burn during the drilling then the piano will not keep tune.

By using a hand tool you are purposefully limiting your speed (you may still need to stop in between to let the wood cool).

Hand drilling tools still have their place, the above is just one example of many like it. But for most purposes a motorized tool (electricity, air, gasoline) is more convenient and a lot faster.

3
gchpaco 5 hours ago 0 replies      
Bit braces (the C shaped tools) are also the finest Phillips screwdrivers available, I've found. The design makes it easy to exert a lot of force to prevent the bit from camming out, and the amount of torque one can generate is astounding. Braces are measured by the diameter of the rotation; most good ones are in the 8"-10" range, which is probably more appropriate for casual use. 14" and up were generally for driving big augers, and you need every bit of that torque. I have a 6" one that is cute and small but not actually all that useful due to jaw issues.
4
WalterBright 6 hours ago 1 reply      
I use a hand drill for quick holes in soft material, because it is faster than dealing with the electric drill, extension cord, etc.

Not mentioned are air powered drills. Those tend to be powerful and very small, meaning they work great in tight spaces.

5
furyg3 4 hours ago 1 reply      
The guy over at the primitive technology vlog on YouTube just added a neat video of himself making a cord drill and a pump drill. http://youtu.be/ZEl-Y1NvBVI
6
vlehto 2 hours ago 2 replies      
>Hand braces have remained in use ever since, although they can be difficult to find today.

As a bushcraft enthusiast/scout located in a Nordic country, this would be ideal. During a week-long camp you might not be able to charge your drill, and during the winter Li-ion batteries die. I've been searching for one with a three-jaw chuck for some years now (so it would be compatible with modern drill bits). I've found one geared drill which had a four-jaw chuck that didn't hold any bit, and some very old and rusty ones that require special bits.

Seriously sell me one. I'd be willing to pay 30e for one, and maybe more.

7
exDM69 4 hours ago 0 replies      
The brace and bit is still the tool of choice for fine woodworking and cabinet making. It's much more precise than using a power tool and it's easier to control the speed and torque to avoid tearout. It's also useful when removing the bulk of the waste for joinery such as dados or mortises, before finishing the job with a chisel or a router plane or such.

A drill press is better if you have one available and the workpiece is small enough to fit. I wouldn't choose a hand tool for bigger construction projects if there's a very large number of holes to be drilled or screws to be attached.

Addressing the Chilling Effect of Patent Damages mozilla.org
75 points by e15ctr0n   ago   28 comments top 8
1
jimrandomh 12 hours ago 2 replies      
This is about the triple-damages rule, which says that if you're found to have infringed a patent willfully, the damages are increased; if you didn't know about the patent, they aren't. This, combined with a large number of very-low-quality, ambiguous patents, and the fact that the patent office's screening creates only a legal presumption of validity rather than a practical one, means that everyone is strongly incentivized to avoid ever looking at anything the patent office publishes. Which makes things even worse, because the problem stays hidden and the patent office can't draw on public knowledge to shoot down bad patents.
2
throwawaykf05 42 minutes ago 0 replies      
Recent reports suggest the threat of treble damages is, statistically speaking, largely theoretical. Only about 0.6% of cases ever get enhanced damages, and even then the amounts are typically pretty low. Source: http://www.law360.com/articles/557734/the-truth-about-patent... Article may be paywalled - clicking via Google may help.
3
skywhopper 5 hours ago 0 replies      
The sanity of any laws built on top of the patent system requires that granted patents make sense and represent actual "inventions". A patented invention should provide enough detail in the patent filing to allow other experts in the same field to recreate the "invention". For any sensible definition of a patent that could be related to software, such a filing would include actual source code (or at least very detailed pseudocode).

Alas, until issued patents actually represent real inventions, then all the rest of the laws surrounding their use and enforcement will appear utterly insane as well.

4
Animats 9 hours ago 3 replies      
Without treble damages, there's no incentive to license a patent. Worst case is paying roughly the same fee it takes to license.

The trouble with requiring "willfulness" is that it requires proving the state of mind of the infringer. This is difficult, and it's not even clear what "state of mind" means for a corporation. See In re Seagate, where the CAFC tried to define "willful infringement" with an objective test.[1]

A stronger remedy than triple damages exists in patent law: an injunction against infringement. Some years ago, Polaroid won an injunction against Kodak for infringing their instant photography patents. Kodak was given 30 days to exit the instant photography business and had to buy back all their cameras for that process from consumers. They did, and that was the end of Kodak's instant photography business. The injunction remedy still exists, but is no longer routinely available since eBay v. MercExchange in 2006.[2]

Without patents, there's little incentive to innovate unless you can throw enough money at a startup to get dominant market share before someone else copies you. VCs used to want to see a strong intellectual property position before putting in money; that gave them some assurance of not losing their investment even if the technology worked. This has been less of an issue for non-technology startups; Doordash, etc. are not technology companies.

The case to which this brief is attached is not about software. It's about a new way to attach transformers to printed circuit boards with surface-mount soldering. Halo, a small startup, developed a way to do this which solved a problem with the solder joints cracking during heating. Pulse, a much bigger company whose transformers tended to crack loose after soldering, copied this and refused to pay royalties. Halo has won the infringement issue; the only remaining question is how much Pulse has to pay them.

[1] http://www.law360.com/articles/102863/seagate-the-issue-of-w...
[2] http://www.bna.com/supreme-courts-ebay-n17179924841/

5
BatFastard 11 hours ago 2 replies      
The patent system is a great system - for 100 years ago. Throw it out; it would do more for innovation than all of the VC money invested over 10 years.
6
CuriousSkeptic 5 hours ago 0 replies      
"Designing around patents is, in fact, one of the ways in which the patent system works to the advantage of the public in promoting progress in the useful arts, its constitutional purpose."

To me that just sounds stupid. It's akin to the argument that breaking windows promotes growth by stimulating the glass industry.

7
boulos 9 hours ago 1 reply      
What an interesting group that banded together...

> Check Point Software Technologies, Inc., LinkedIn Corporation, Mozilla Corporation, Netflix, Inc., Pinterest, Inc., Roku, Inc. and Twitter, Inc. (Amici) are technology and Internet companies.

I understand why the media ones are together (MPEG, etc.) but I don't understand Check Point (LinkedIn, I remembered while writing this, purchased Lynda.com, so it cares a lot about serving video). Can anyone explain?

8
rdtsc 11 hours ago 1 reply      
An interesting effect indeed. I can imagine someone seeing their company release some new technology, and then realizing there is an existing patent on it issued to another company.

What is that employee to do? If they say something, all of a sudden they triple the liability their company would face if litigation happens. If they don't say anything, the product will continue to be released, which would increase the chance of the patent owner suing as well.

Controversial CRISPR history sets off an online firestorm statnews.com
13 points by tokenadult   ago   3 comments top
1
tokenadult 4 hours ago 1 reply      
Another article, "CRISPR controversy reveals how badly journals handle conflicts of interest," reports how differing accounts of how CRISPR gene-editing technology was developed illustrate the bigger problem of scientists publishing research papers when they have an undisclosed commercial interest in the research findings.

http://www.statnews.com/2016/01/21/crispr-conflicts-of-inter...

Touch with Leap Motion: WebGL demo using Leap Motion and signed distance field github.com
6 points by plurby   ago   1 comment top
1
dang 45 minutes ago 0 replies      
Url changed from http://www.edankwan.com/experiments/touch/, which points to this, which gives more background.
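For readers unfamiliar with the "signed distance field" in the title: an SDF is a function that, for any point in space, returns the distance to the nearest surface (negative inside, zero on the surface, positive outside), which lets a renderer "sphere-trace" rays toward geometry. The actual demo runs in WebGL/GLSL; the sketch below is an illustrative Python version of the idea, not code from the demo (all names are my own):

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside, 0 on the surface."""
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def raymarch(origin, direction, sdf, max_steps=64, eps=1e-4):
    """Sphere tracing: step along a normalized ray by the SDF value each time.

    Returns the hit distance along the ray, or None if no surface was reached.
    """
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
        d = sdf(p)
        if d < eps:
            return t  # close enough to the surface to count as a hit
        t += d        # safe to advance by d: no surface is nearer than that
    return None
```

For example, a ray fired from (0, 0, -3) along +z at a unit sphere centered on the origin hits the surface at distance 2. In the real demo this loop runs per-pixel in a fragment shader, which is why SDFs pair so well with WebGL.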
       cached 24 January 2016 17:02:02 GMT