Hacker News with inline top comments - 17 Mar 2017
1
De-Location Package: Keep Your Career and Live Beyond the Bay Area zapier.com
21 points by bryanh  21 minutes ago   2 comments top 2
1
pmiller2 1 minute ago 0 replies      
Sounds like a neat way for Zapier to take advantage of Bay Area talent without paying Bay Area salaries. This isn't really for me, because I don't like remote work and I like living in the Bay Area, but I think this is a brilliant move. However, I'd want to know what the pay scale was like before making this kind of move.
2
bryanh 2 minutes ago 0 replies      
Zapier CTO & co-founder here.

Much like Stripe's "hire a team" experiment - this is an experiment to pay people to "de-locate" from the Bay Area. Don't get us wrong, we absolutely love the Bay Area (I live here) but the cost of living is just outrageous for so many.

We're seeing a lot of candidates talking to Zapier (we're fully remote) about leaving the Bay Area to go "home" (some to start a family, some for other reasons) but who want to stay in their tech careers.

Happy to answer any questions, and I am sure there are a lot of Zapiens in the thread that could answer questions too.

2
In search of a simple consensus algorithm rystsov.info
111 points by justinjlynn  3 hours ago   35 comments top 13
1
cube2222 1 hour ago 2 replies      
I have the feeling that this article is just bashing strong-master consensus protocols. And the truth is, yes, they incur a penalty for electing a master.

However, this really gets amortized in most workloads if the leader changes only rarely. Additionally, in an environment with a good network connection between nodes (a few ms), you can set the timeout to be much less than a few seconds (it could actually be less than a second); this way you get a shorter window of unavailability.

There's another point he touches on, about the unavailability of the whole cluster when the leader is down. Really, this isn't dependent on the protocols, but on the applications. If you have one Paxos/Raft group per replica, you actually only get a small window of unavailability. Additionally, even consistent reads do not need a living master to be possible.

It's worth reading the Spanner paper to get more insight into the high availability/consistency achieved with a Paxos-based DB implementation ( https://static.googleusercontent.com/media/research.google.c... ).

EDIT: And in my opinion, calling it a SPOF is missing the point a little bit.

2
rescrv 13 minutes ago 0 replies      
Paxos's complexity is often overstated. I made a simple implementation of Paxos for a key-value store as a proof of concept to demonstrate how you can simplify Paxos by removing leader election, multi-master management, and log garbage collection.

Here's my blog post on the issue: http://hack.systems/2017/03/13/pocdb/

The entire implementation is 1100 lines of code including comments.
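
For readers who haven't seen Paxos stripped down this far, a minimal sketch of the single-decree acceptor logic at its core might look like the following (illustrative Python only; the names and structure are my assumptions, not code from pocdb):

    class Acceptor:
        def __init__(self):
            self.promised = 0      # highest ballot number promised so far
            self.accepted = None   # (ballot, value) of the last accepted proposal

        def prepare(self, ballot):
            # Phase 1: promise to ignore anything below `ballot`, and report
            # any previously accepted value so the proposer must re-propose it.
            if ballot > self.promised:
                self.promised = ballot
                return ("promise", self.accepted)
            return ("reject", self.promised)

        def accept(self, ballot, value):
            # Phase 2: accept unless a higher ballot has been promised since.
            if ballot >= self.promised:
                self.promised = ballot
                self.accepted = (ballot, value)
                return ("accepted", ballot)
            return ("reject", self.promised)

A value is chosen once a majority of acceptors accept the same ballot; everything beyond this (leader election, multi-master management, log garbage collection) is the machinery the post argues you can strip away.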

3
startupdiscuss 4 minutes ago 0 replies      
I am surprised by:

1. I am technically pretty strong but I have no idea what this paper is about

2. So many people know what this is about that it shot up to #1 on HN

Can someone give a pointer (a link or two) to the lay, interested audience here about what the field IS? Just a sort of intro guide for someone who knows about programming and math, but has never heard the term Paxos?

I am curious, and I am sure many others are as well.

4
irfansharif 2 hours ago 4 replies      
It is my understanding that the motivation for seeking out consensus algorithms with strong leaders (or equivalent), as opposed to horizontally weighted peer-to-peer ones, is the performance penalty imposed by the latter in the general case. Structuring the protocol as a dissemination from the 'leader' node down to the followers, as opposed to a bottom-up approach, fares better when your leader is long-lived, circumventing the cost of determining 'consensus' every single time. It's readily apparent that this would lead to a performance penalty in the pathological case, as is demonstrated here, when the leader node is taken down repeatedly - but I'm skeptical that this is true for the workloads that systems like coreos/etcd and cockroachdb/cockroach were intended to handle.
5
hyperpape 14 minutes ago 0 replies      
Why test one store with wrk, and the other with a JS client? How do you know the load-testing framework isn't skewing the results?
6
billsmithaustin 2 hours ago 0 replies      
Maybe Aphyr can test it out between whiteboard interviews.
7
agentultra 1 hour ago 0 replies      
I've been curious to know whether one could encode performance constraints -- or at least their probabilistic bounds -- in a model, and have the checker invalidate a design that steps over them.
8
coldpie 11 minutes ago 0 replies      
Does every technical article need to be scattered with Impact font memes now?
9
felixgallo 1 hour ago 0 replies      
I've been implementing epaxos for a few years, slowly. It's decidedly not simple in recovery, but Iulian has been available and kind.
10
XorNot 2 hours ago 1 reply      
Interesting. I wonder if the etcd API would survive being implemented this way?
11
forgotpwtomain 2 hours ago 0 replies      
I didn't get anything out of this blog-post except a bunch of numbers based on the default parameters (such as leader election timeout) of various systems. Here is the link to the EPaxos paper it purports to discuss though:

https://www.cs.cmu.edu/~dga/papers/epaxos-sosp2013.pdf

12
kerkeslager 2 hours ago 1 reply      
Maybe I'm missing it because I'm on my phone, but where's the code?
13
partycoder 1 hour ago 1 reply      
I wonder what the author's thoughts are on ZAB.
3
Lack of Oxford Comma Could Cost Maine Company Millions in Overtime Dispute nytimes.com
140 points by uyoakaoma  2 hours ago   103 comments top 23
1
gnicholas 37 minutes ago 2 replies      
The title of this HN post, like nearly every headline I've seen for this lawsuit, is misleading. The headline makes it seem as if the company forgot an Oxford comma and is now losing millions as a result of having made a typographical error.

This is not the case. There is a statute that does not have a comma where one might have been, and the litigants fought over whether the lack of a comma toward the end of a list ought to be read in light of the Oxford comma convention or not.

And it's actually more complicated, as the last item in the list is (potentially) compound, which makes it difficult to tell whether the "and" is to be attached to the final element alone, or to be a binding of the final element and previous elements.

Not that it's uncommon for articles to have misleading headlines, but I've been surprised at the extent to which nearly every article (save this one [1], by the ever-precise law professors at the Volokh Conspiracy) has misrepresented the case (or misunderstood it?) to make it seem like a company's typo led to a multimillion-dollar loss.

1: https://www.washingtonpost.com/news/volokh-conspiracy/wp/201...

2
mbillie1 46 minutes ago 4 replies      
Alternate headline: workers to receive appropriate compensation following accurate interpretation of state law.
3
weeksie 1 hour ago 2 replies      
I don't think I've ever heard an impassioned argument against the Oxford comma. I mean, I have no problem with it, but there seems to be a belief that this is a less filling/tastes great holy war, when really, omitting the serial comma is fairly archaic at this point. At least in my experience. Sometimes lists need one, sometimes they don't. At this point in the evolution of our written language that should be a fairly unambiguous proposition.
4
rayiner 1 hour ago 6 replies      
The law in question excluded "canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution of" agricultural products from the requirement for 1.5x overtime pay.

In a sane world where everyone used the Oxford comma, that sentence would be clear: the last item in the sequence would be "packing (for shipment or distribution)" and the drivers would be entitled to their overtime pay. But the Maine Legislature's drafting manual recommends omitting the Oxford comma, so the above sentence is ambiguous. And it's genuinely ambiguous, because either meaning could be what the Legislature intended.

5
verbatim 5 minutes ago 0 replies      
The longstanding question posed by Vampire Weekend has finally been answered: Oakhurst Dairy cares.
6
seanwilson 1 hour ago 5 replies      
In almost every case where I see people arguing for an Oxford comma, I see a sentence that should be broken up or restructured to make it less ambiguous. A sentence where not noticing a comma can drastically alter its meaning (especially when it's a losing battle to make everyone understand the Oxford comma) is not a good sentence.
7
stupidcar 1 hour ago 1 reply      
"I'd like to thank my parents, Ayn Rand and God."
8
burntwater 48 minutes ago 2 replies      
My question is, what makes the canning industry so unique and special that it's deserving of exemptions from fair pay?
9
danbruc 35 minutes ago 0 replies      
Couldn't they just have looked up the intended meaning of the law? I mean, laws are not made by just writing down the law in its final form; there are designs, discussions, and drafts. Wouldn't it be likely that there are documents showing the intended meaning more clearly, for example by listing the exceptions as bullet points? In my opinion that would provide a much better argument to settle the issue one way or another than appealing to grammar rules and style guides.
10
lotsofpulp 23 minutes ago 0 replies      
I wonder how Maine is able to exempt workers from overtime; wouldn't that contradict federal overtime laws?

https://www.dol.gov/whd/overtime_pay.htm

11
madenine 40 minutes ago 0 replies      
Ignoring the comma issue, it still seems poorly worded.

"The canning, processing, preserving, freezing, drying, marketing, storing, packing for shipment or distribution of:"

Let's group related activities based on what workers might be doing/where they might be doing it.

If the law intended to exclude truck drivers from overtime, our groups could be:

- Canning, Processing, Preserving, Freezing, Drying (processing facility?)

- Marketing (?)

- Storing, Packing for Shipment (Warehouse)

- Distribution (On the road)

Why is one sentence trying to define rules for all those groups?

12
bmcusick 35 minutes ago 0 replies      
There should be an "or" before the word "packing" if "packing for shipment or distribution" is a single phrase (rather than two alternatives). The Oxford Comma debate is a sideshow. It's all about where you find the "or", which indicates the last choice.
13
kolbe 36 minutes ago 1 reply      
It's kind of amazing that we even write laws this way anymore. There isn't much to gain from freely allowing legislators to use the English language to dictate our system of rules. I'm sure there are much clearer templates that everyone should follow.
14
bandrami 54 minutes ago 0 replies      
The case was decided on equity, not grammar. What the comma style did was give the company a specious argument to make.
15
sixhobbits 29 minutes ago 0 replies      
There are two types of people in this world: those who use Oxford commas, those who don't and those who should.
16
PuffinBlue 1 hour ago 2 replies      
The article asks:

> Does the law intend to exempt the distribution of the three categories that follow, or does it mean to exempt packing for the shipping or distribution of them?

I'd say it does both. It exempts workers who package for shipment AND workers who distribute.

Discuss... :-)

17
hughlang 41 minutes ago 0 replies      
They could have done so much more with that first sentence. Example: "An appeals court ruling on Monday was a victory for truck drivers, punctuation pedants and nazis"
18
d--b 1 hour ago 0 replies      
Or: why AI has a long way to go in understanding natural language.
20
kaosjester 1 hour ago 1 reply      
The sheer amount of misused punctuation in this comments section seems to indicate a systematic problem.
21
bob_rad 2 hours ago 1 reply      
This should totally be a case study in grammar lessons. This comma could cost you millions kids, pay attention!
22
dredmorbius 54 minutes ago 0 replies      
Of possible relevance: The New York Times, per its internal style guide, eschews the Oxford Comma.

I smell a possible revolt on the part of Mr. Victor.

23
dmckeon 1 hour ago 0 replies      
Also posted to HN as: https://news.ycombinator.com/item?id=13886467 and https://news.ycombinator.com/item?id=13879156

For me the lesson here is "use unambiguous language" rather than "{always|never} use an Oxford comma".

The 29-page decision shows that the "Oxford comma" is only part of the court's interpretation of the law, and shows that the court examined several paths to reach an interpretation.

http://cases.justia.com/federal/appellate-courts/ca1/16-1901...

4
If War Can Have Ethics, Wall Street Can, Too nytimes.com
29 points by teslacar  1 hour ago   16 comments top 6
1
sfard 11 minutes ago 4 replies      
Wall Street has rules. They're just not enforced. People would be shocked if they knew, for instance, how many hedge funds simply operate on a model of "black edge" insider trading.
2
rallycarre 9 minutes ago 0 replies      
In war, there is a benefit to treating your enemy with dignity: treatment of prisoners, morale ("we are the good guys"), etc. On Wall Street there isn't, with white-collar crime only getting a slap on the wrist even when it puts millions of people on the street.

The system is broken when corruption and misdirection are not punished in proportion to the weight of the crime.

3
titraprutr 18 minutes ago 4 replies      
"Ethics of War" sounds like an oxymoron.
4
linkmotif 2 minutes ago 1 reply      
Bad premise: war is a crime against humanity, or, eh, should be.
5
Eridrus 22 minutes ago 0 replies      
This seems like a reasonable place to start a discussion, but it's hard to assess without real proposals. If I had to guess at what he is suggesting, it's that all risk be borne by the company, which really seems like an argument for less risk-taking and more consolidation; not too surprising from a military man, but pretty anathema to technologists.
6
muninn_ 15 minutes ago 0 replies      
If Wall Street can have ethics, the US government can, too.
6
Enroute Airbus A380 wake flips Challenger business jet upside down flightservicebureau.org
505 points by mustpax  14 hours ago   288 comments top 22
1
phumbe 13 hours ago 5 replies      
For anyone interested, there are several great youtube videos[1-4] that show the wakes (the wingtip vortices in particular) created by various aircraft.

This incident brings the Reduced Vertical Separation Minima (RVSM)[5] into question. Strategic Lateral Offset Procedure (SLOP)[6] can be used to avoid such incidents.

[1] https://www.youtube.com/watch?v=E1ESmvyAmOs

[2] https://www.youtube.com/watch?v=dfY5ZQDzC5s

[3] https://www.youtube.com/watch?v=uy0hgG2pkUs

[4] https://www.youtube.com/watch?v=KXlv16ETueU

[5] https://en.wikipedia.org/wiki/Reduced_vertical_separation_mi...

[6] https://en.wikipedia.org/wiki/Strategic_lateral_offset_proce...

2
lutorm 11 hours ago 6 replies      
This is a reminder that the announcement that you should "keep your seat belt fastened at all times when you're in your seat" isn't just something someone made up. It's a small price to pay in case you ever encounter severe turbulence en route, but it might be one of these things that most people don't realize until it happens to them. Hitting your head against the cabin ceiling can seriously put a damper on your trip.
3
uptownfunk 13 hours ago 2 replies      
Had to google around to understand wake turbulence. If I read the wiki correctly, it's basically a horizontal tornado that emanates from the wings. That would explain the rolling that the aircraft encountered.

On another note, I fly every week for work, can't imagine rolling five times and engines losing power with a drop of 10k feet. That's absolutely insane. I've had engines lose power before, but it was quickly regained such that the drop was more moderate.

4
jmts 10 hours ago 1 reply      
This is quite interesting. I've been living underneath a flight plan for the last few years, and every now and then I've noticed a strange noise that seems to come from the air a few seconds after a plane has passed overhead on approach. The best way I can describe it is as a tearing noise, somewhat like a long stretched version of a tear in paper, or fabric. I've long suspected that it may be due to turbulence, and this article certainly suggests that the turbulence for some aircraft is much more powerful than I'd suspected.
5
bfrog 41 minutes ago 0 replies      
Landing at O'Hare in a smaller Bombardier 2x2 jet (a CRJ 700, I think) following a jumbo was an interesting (white knuckle) ride I had in recent memory. The plane turned over Lake Michigan east of the city and felt like it went completely 90 degrees to the ground, losing a good amount of altitude, which I do not believe was the pilot's intention. I definitely heard a few gasps, the stewardess curse and my wife grip my hand hard. That was followed by 10 minutes of bumps, drops, and wing waggling, and a landing that felt like we simply dropped 10 feet onto the tarmac.

Those little jets man... they're fine most of the time but what a spooky experience that was.

6
clueless123 13 hours ago 3 replies      
A big problem is the use of autopilot navigation locked to the GPS route... this pretty much guarantees that you will fly right below the wake of the plane above on the same route! (Before GPS it would be very odd to fly exactly the same path.)
7
foliveira 6 hours ago 2 replies      
Good video of the same (similar?) effect on a small plane taking off behind an Antonov-2:

http://www.youtube.com/watch?v=KXlv16ETueU

8
arrty88 11 minutes ago 0 replies      
this is horrifying. does anyone have a sketch of how many flips the jet underwent before regaining control?
9
jacquesm 13 hours ago 2 replies      
The wakes of large aircraft are such that even big passenger planes make sure to keep plenty of separation.

https://en.wikipedia.org/wiki/American_Airlines_Flight_587

Is another example (though pilot error likely made a bad situation worse in that particular incident).

10
gingerbread-man 2 hours ago 1 reply      
Right now, the vertical separation minima for aircraft worldwide are not conditional on size. At altitudes above 28,000ft MSL, the minimum vertical separation is 1000ft, even for superjumbos. It is probably a rare event that two aircraft come so close both vertically and horizontally, but I wonder if there will be a rule change because of this incident nonetheless.

By contrast, the horizontal separation minima vary dramatically based on the size of the leading aircraft.

11
jseip 13 hours ago 2 replies      
The physics of this are insane. Then again, by my calculation the A380 displaces roughly 11 million cubic feet of air per second at the max landing weight.
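
One common back-of-the-envelope model treats the wing as strongly influencing a cylinder of air whose diameter equals the wingspan; under that assumption (and the span/speed figures below, which are my assumptions rather than the parent's), the arithmetic lands in the same range:

    import math

    span  = 79.75   # A380 wingspan in meters (assumption)
    speed = 72.0    # approach speed in m/s, ~140 knots (assumption)

    area = math.pi / 4 * span ** 2   # cross-section of the affected cylinder, m^2
    flow = area * speed              # cubic meters of air swept per second
    ft3  = flow * 35.315             # 1 m^3 = 35.315 ft^3

    print(f"{ft3 / 1e6:.1f} million cubic feet per second")   # -> ~12.7
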
12
peteretep 12 hours ago 0 replies      
I've been on a widebody caught in an A380 wake before. Some seriously violent turbulence for a few seconds, and the pilot came on afterwards to tell us what happened.
13
gigatexal 9 hours ago 0 replies      
Holy smokes that must have been terrifying. Glad nobody died. I hate flying as it is (not that I'm likely to ever find myself in a private jet anytime soon).
14
Animats 10 hours ago 1 reply      
Wow. Uncontrolled roll at least 3 times, maybe 5. G-loads severe enough to damage the airframe beyond repair. And this was a 9-passenger bizjet, not a tiny light single.
15
dbalan 6 hours ago 0 replies      
IMO this has slightly better info: https://avherald.com/h?article=4a5e80f3&opt=0
16
elberto34 2 hours ago 5 replies      
I wonder why the private plane had to be written off... was the damage that bad? It's as if the plane broke apart... maybe the problems were mostly internal.
17
nsgoetz 13 hours ago 2 replies      
I don't know much about aviation, but 1000ft seems kinda close to pass another aircraft. How normal is that distance?
18
paulannesley 13 hours ago 8 replies      
> one thousand feet above

Are the imperial units of measurement an aviation thing, or an American thing, or a bit of both?

19
nunez 11 hours ago 2 replies      
I thought controllers space aircraft based on their takeoff weight for this reason. Is this true?

A380s are HUGE, so this isn't surprising. Wake turbulence is a killer.

20
lutusp 8 hours ago 0 replies      
Pilots should be aware of wake turbulence and wingtip vortices in particular, and should be aware that being at a lower altitude than the generating aircraft is the most dangerous position (the vortex pattern is denser than the surrounding air and therefore descends).

Also, an invisible vortex is no less turbulent for its invisibility.

21
gadders 4 hours ago 0 replies      
That was some flying by the Challenger pilots (to my layman's eyes at least). I'm glad I wasn't on that flight.
22
EGreg 9 hours ago 2 replies      
"until the crew was able to recover the aircraft exercising raw muscle force"

Any more info on that??

7
Faster 3D rendering with WebGL 2.0 chromium.org
126 points by DanielRibeiro  8 hours ago   42 comments top 8
1
roschdal 6 hours ago 0 replies      
Developers are welcome to help update Freeciv WebGL 3D to WebGL 2.0. https://play.freeciv.org
2
NickBusey 13 minutes ago 0 replies      
The fact that the Chromium blog requires JS to display some text and one image is kind of ridiculous.
3
mixedbit 5 hours ago 4 replies      
According to webglstats.com WebGL is supported on 96% of browsers/devices. I wonder what is a maximum possible coverage to be achieved by WebGL2 with current hardware (If I understand correctly some WebGL capable devices will never be able to support WebGL2 due to hardware limitations).
4
andrew3726 4 hours ago 0 replies      
Relevant talk from GDC17: "Reach the Largest Gaming Platform of All: The Web. WebGL, WebVR and glTF" [0]

[0] https://www.youtube.com/watch?v=jLNtjujPhzY

6
ionwake 2 hours ago 1 reply      
Sorry for the ignorance, but:

1. Will devices remain backwards compatible with WebGL 1?

2. Will three.js be able to easily update to this version?

7
willvarfar 6 hours ago 1 reply      
I liked the After the Flood demo https://playcanv.as/e/p/44MRmJRU/ linked to in the post, but was disappointed by the lack of shadows cast by the leaves and that the blowing leaves intersected and passed through each other. It's still very distinctly computer-generated.
8
uegwqhugehoq 3 hours ago 1 reply      
Does WebGL 2 have geometry shaders?
8
High quality graphics for Latex documents: The Matlab route io.ac.nz
60 points by JohnHammersley  5 hours ago   47 comments top 10
1
leephillips 3 hours ago 3 replies      
Gnuplot is designed to work with LaTeX. If you want a unified appearance, with textual elements on the graph typeset by TeX, then that workflow is built-in to gnuplot.

You can also use TikZ to call gnuplot from within the LaTeX document, or generate TikZ code from within gnuplot.

A very brief introduction:

https://lwn.net/Articles/628537/

2
lottin 40 minutes ago 0 replies      
R can output TikZ code. Probably the best option for statistics-related work.

http://www.texample.net/tikz/examples/tikzdevice-demo/

3
radarsat1 4 hours ago 0 replies      
I've historically just used matplotlib and its TeX-based text renderer for labels. Output PDF, \includegraphics in a LaTeX document. Works fine, text looks great. http://matplotlib.org/users/usetex.html
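
The workflow is only a few lines; a minimal sketch (the plot contents are made up, and usetex requires a working LaTeX installation):

    import matplotlib
    matplotlib.rcParams["text.usetex"] = True   # hand all text rendering to LaTeX

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 200)
    plt.plot(x, np.sin(x))
    plt.xlabel(r"$\theta$ (rad)")
    plt.ylabel(r"$\sin\theta$")
    plt.savefig("figure.pdf")   # then \includegraphics{figure.pdf} in the .tex file
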
4
widdma 2 hours ago 0 replies      
I found matlab2tikz to be superior to plot2LaTeX. I did my honours thesis with plot2LaTeX+Inkscape and my PhD one with matlab2tikz, so I've had a fair experience with both.

I have created some high quality, complex plots through matlab2tikz with no problem. I haven't had the issues the author notes, and I'd suggest they file a bug report. The m2t community is active and friendly.

The other bonus of m2t is that you can edit the files in PGFPlots, which is a powerful and complete language/library. For my use case it was important that my plots be repeatable, so manually doing it with Inkscape would be tedious. Having easily editable source code allowed me to write scripts to filter the output to what I needed.

The one problem I did find is that pgfplots can be quite slow. This isn't m2t's fault though and can be overcome with TikZ's cache system.

Disclosure: I wrote the path simplification in matlab2tikz, but nothing else.

5
jacobolus 4 hours ago 4 replies      
> But your nice professional looking documents can be ruined because you insert the wrong type of figures.

Example A of the wrong type of figure for any possible context: 3-dimensional pie charts.

6
Bedon292 1 hour ago 1 reply      
I have taken to using PowerPoint to create the actual charts. Edit them and make them look pretty. Then export as EMF, convert to EPS (https://www.cs.bu.edu/~reyzin/pictips.html) and use the EPS in the Tex document.

It's a very manual process, though. But I think the graphics that come out of it are nicer than what Matplotlib and Matlab can do. Am I crazy, or should I really consider something like this?

7
amelius 3 hours ago 1 reply      
Maybe it's just me, but why can't Matlab just use the LaTeX fonts in an EPS and output that EPS as a single file? That sounds like a more robust and less convoluted solution.
8
moultano 1 hour ago 1 reply      
Does anyone know of an equivalent solution for R/ggplot? I feel sad labelling my plots with R's fonts and equation formatting, but haven't found an alternative.
9
taliesinb 1 hour ago 0 replies      
In Mathematica, Export[expr,"PDF"], Export[expr, "SVG"], and Export[expr,"LaTeX"] for math expressions.
10
partycoder 1 hour ago 0 replies      
I wonder if there is a similar guide for Scilab.
9
Netflix Replacing Star Ratings With Thumbs Ups and Thumbs Down variety.com
117 points by gerosan  2 hours ago   131 comments top 37
1
pizzetta 1 hour ago 6 replies      
As I posted in the dupe:

This saddens me.

>Users would rate documentaries with 5 stars, and silly movies with just 3 stars, but still watch silly movies more often than those high-rated documentaries

That's not incongruous to me. The stars are not about the "enjoyment" factor; they are about perceived quality. I may have a go-to cheap ice cream and rate it 3 stars but rate a good affogato 5 stars and only have it once in a while.

They are diluting the meaning of quality and instead are opting for a saccharine "enjoyment" factor. This binary choice does not sit well with me and I hope they abandon the idea soon.

2
wj 3 minutes ago 0 replies      
I'm still disappointed they removed their friends feature. I valued my friends' ratings more than the ratings of the Netflix user base as a whole. I have noticed that Netflix tends to be more accurate with the "users like you" star rating than the whole user base ratings.

I no longer see a sum of my ratings but I believe it is well over 2,500. I know others that have rated a lot more (when there was the friends feature you could see your friends' number of ratings). Because of my number of ratings I felt that Netflix did a pretty good job with recommending me content. I will be sad to see this go as I definitely refer to star ratings when adding content to my queue.

Others have mentioned that the two factors of quality and enjoyment make the star rating more valuable and I agree. The only time I remember it breaking down for me was the film Rachel Getting Married (though I am sure there were others). I couldn't stand Anne Hathaway's character to the point that I gave it one star but at the same time I recognized that she gave a really strong performance in what was probably a good film.

Are they converting ratings to thumbs up/down? What does a three-star rating convert to? Those are typically movies that I enjoyed but wouldn't rewatch or recommend to others.

3
yladiz 1 hour ago 1 reply      
I feel like star ratings, if just given in general, are pretty useless for anything that has multiple qualities. For example, maybe I like the cinematography of a movie but I think the acting is shit, or maybe I think the taste of the beer is nice but it's too acidic or sour and wouldn't work with specific foods because it's overpowering.

Airbnb does this pretty much spot on, in that they ask for 1-5 stars for five categories, rather than just "did you enjoy this Airbnb?" I also think that Untappd does this wrong, because in general beers can't be described in just a 1-5 star rating without deeper explanation.

I think that moving from a pure star rating to a thumbs up and down rating is better overall, if only because it makes me, as a watcher, not have to think as much and therefore give a rating where I might not have before. If I want to go more in depth, I can explain more too.

4
ykler 2 hours ago 4 replies      
From the linked Variety article, "However, over time, Netflix realized that explicit star ratings were less relevant than other signals. Users would rate documentaries with 5 stars, and silly movies with just 3 stars, but still watch silly movies more often than those high-rated documentaries."

But the signal isn't just for Netflix; it is also for users, who might sometimes be in the mood for something silly and sometimes in the mood for something good. Also, people might rather get more suggestions of good movies even if they are more likely to watch bad ones. (Of course, people might also just overrate documentaries.)

5
wvh 1 hour ago 2 replies      
Is this just part of the trend to dumb everything down to a polarising dichotomy? Because the trend to stop people from having a more complex and rational opinion than just picking sides doesn't seem to exactly improve discourse or debate, or inform and educate...

Wouldn't it be more beneficial to have fewer people give a more detailed review describing what they like and dislike about something? What use is "good" or "meh" compared to, for example, "I rate this product low because the shoes are very narrow for my normal feet", so you can relate it to what you are actually looking for? One person's "meh" could be another's "good"...

From the article:

"This makes sense giving a five-star rating takes some thought, especially for something like a movie or TV show.

A binary yes or no option is much easier for viewers to commit to, [...]

6
kartickv 2 hours ago 4 replies      
If you have many reviewers, you don't need to get more information than thumbs up or down from each person. What fraction of people thumbed it up is a good indicator.

Star-rating can be too much detail, anyway: if you're comparing two shows, and one has a higher fraction of five-star ratings, but the other has a higher fraction of ratings that are four or above, which is better? Star ratings can be too much detail and cause confusion.

If you want more detail from each person, you can ask specific questions with a yes/no answer, like, "Were parts of it boring?" or "Was it violent?" That's probably better than star ratings.

7
mr_tristan 17 minutes ago 0 replies      
At no point do I recall Netflix promoting this rating system as anything but a way to get better matches. The fact that so many comments are about losing a system for criticism just confirms that it was the wrong metaphor to use in the first place.

Perhaps they could bring back a "critical review" mechanism, but I'd guess it wouldn't affect your matches at all. And probably be mostly unusable on a TV anyway.

8
jerf 1 hour ago 2 replies      
I think a lot of people are reacting here as if Netflix is moving from the only signal they have being the star and review system to the only signal they have being the up/down system, and thus confused as to why Netflix would throw away the vast majority of their rating ability.

But that's not even close to true. In addition to it technically being up/down/no rating, we've got how long people watch the show for, how the shows they watch form a pattern, how all the patterns of what people watch show global preference patterns, whether they rewatch a show, when they watch what sort of shows, what individual scenes are rewatched vs. skipped... Netflix is swimming in a sea of preference data, not sitting here trying to figure out "Gosh, um, if the user likes this movie 3 vs. 4 stars, uh... what does that mean?"

It makes perfect sense to me to optimize this one-data-stream-among-many to increase user participation and get more bits of information from more people engaging with the simpler system, rather than trying to squeeze bits out of the few people willing to use the star system and the even fewer willing to write useful reviews.

It isn't really even as shocking as it may seem at first. The star system has 6 states: "no rating", 1, 2, 3, 4, and 5 stars. That's 2.6 bits, with some simplifying assumptions [1]. The thumb system has up, down, and no rating; that's 1.6 bits [1]. To make up for the bits, you need only see a ~63% increase in participation over the current star system... think that's going to happen?

[1]: The simplifying assumption is that all outcomes are equally likely, but that's not true. I don't have the numbers to run a more complete information theory analysis, but it's not hard to imagine the "no star rating" case is so common that it produces such a small fractional bit that a higher-participation-rate yea/nay/no rating (if you puth this UI in their face, "no rating" becomes much more meaningful, too) straight-up produces more bits of information on average, and is thus simply an improvement even before considering the superior UI experience. I rather suspect this is the case, some very sensible assumptions would suggest this, but I lack the ability to prove this; the "assume all outcomes are equally likely" is at least a concrete case I can discuss.
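
Under that same equal-likelihood assumption, the break-even point can be checked directly (a quick sketch, not code from the comment; the 200% figure is Netflix's reported test result mentioned elsewhere in the thread):

    from math import log2

    stars  = log2(6)   # {no rating, 1..5 stars} -> ~2.585 bits per user
    thumbs = log2(3)   # {no rating, up, down}   -> ~1.585 bits per user

    print(f"star system:  {stars:.2f} bits")
    print(f"thumb system: {thumbs:.2f} bits")
    print(f"break-even participation increase: {stars / thumbs - 1:.0%}")
    # -> ~63%, well below the 200% increase Netflix reported in its tests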

9
shdon 13 minutes ago 0 replies      
I get why they do this and it'll probably work out fine, but for me personally, it's going to mean I will rate a lot fewer movies/shows. In the past, I've rated pretty much every single thing I've ever watched on Netflix. On YouTube, I hit the thumbs-up or thumbs-down buttons on a very small fraction of the videos that I watch. That's just because there is no middle ground (and no 2 or 4 star ratings either). There's plenty of stuff that I enjoyed, but didn't love, plenty of stuff that I disliked, but didn't really hate.
10
Tepix 2 hours ago 3 replies      
Bummer. I understand why they do it but I think it's too coarse. I'd much rather know how many people thought a movie was fantastic and not just decent.
11
AlexandrB 2 hours ago 4 replies      
> The streaming service said it had been testing thumbs up and down ratings with hundreds of thousands of members in 2016 and it led to 200% more ratings being given.

I don't understand why engagement is the right metric here. If someone isn't sure how they feel about a movie, why is it a benefit to have them spew their half-formed thoughts into a like/dislike rating?

12
frik 2 hours ago 3 replies      
What's next? Just a thumbs up like on Facebook?

Rating is a hard problem. No system works universally well. IMDb (an Amazon property), for example, uses 10 stars; Amazon uses five stars. Facebook uses thumbs up ("likes"). Game ratings are often in percent, 1-100% (summarized by metacritic.com and others). School systems around the world use A, B, etc., or numbers like 1-5 or 1-6 for grades.

13
ggregoire 35 minutes ago 0 replies      
I wish we could have the IMDB/RT ratings directly in Netflix and a page with all the movies sorted by those ratings.

There are articles on the Internet like "top 100 Netflix movies of the month", but it's almost always for the US Netflix.

14
widdma 2 hours ago 1 reply      
A problem I'd like to see a good solution to is selection bias. Unless there is some kind of reward, only people who really care are likely to rate something.

I think Netflix's move might help this. It certainly lowers the cost of rating.

15
openasocket 41 minutes ago 0 replies      
I don't have particularly strong opinions about this change, I don't actually rate many things that I watch (I think the recommendation system can go mostly by what I watch).

I think the fundamental problem with different rating systems is that they are all one-dimensional. Maybe a two-category system, like 1-5 stars for "enjoyment" and another 1-5 stars for "quality". Or a 1-5 rating paired with a label, like "campy" or "serious", to prevent apples-and-oranges comparisons.

16
flaviojuvenal 2 hours ago 2 replies      
But the UX of the current rating system sucks. The rating stars are very small and you have to hover over them to know you can rate. Only if you watch a movie to the very end will you see clearer rating functionality, and even then you have only a few seconds to use it.

I bet many new users don't even know they can rate. I wonder if people aren't rating much because the UX sucks, not because it's a 5-star system.

17
sergiotapia 31 minutes ago 1 reply      
Amy Schumer's comedy special bombed so hard they're changing their ranking system? Total failure.
18
andy_ppp 43 minutes ago 0 replies      
You could come up with an algorithm that, instead of showing a plain average of the star ratings, biases the weighting towards people who rate things in a similar way to you (see the sketch below). We all have an internal set of assumptions about what a 5-star rating system means to us; this is a case where the filter bubble leads to better understanding. People would then see the ratings they expected rather than a mix of different methodologies averaged together.
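
A minimal sketch of that idea, assuming a made-up ratings matrix where NaN marks an unrated title (illustrative only, not a real recommender):

    import numpy as np

    def cosine_sim(a, b):
        # Similarity computed over the titles both users have rated.
        mask = ~np.isnan(a) & ~np.isnan(b)
        if not mask.any():
            return 0.0
        va, vb = a[mask], b[mask]
        denom = np.linalg.norm(va) * np.linalg.norm(vb)
        return float(va @ vb / denom) if denom else 0.0

    def personalized_rating(title_ratings, me, raters):
        # Weight each rater's score for this title by their similarity to `me`.
        weights = np.array([max(cosine_sim(me, r), 0.0) for r in raters])
        if weights.sum() == 0:
            return float(np.mean(title_ratings))   # no similar raters: plain average
        return float(np.average(title_ratings, weights=weights))
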
19
danlindley 2 hours ago 3 replies      
I can't imagine any film that conveys a message, cinematography, style, and meaning that could be encapsulated with a simple thumbs up or thumbs down.

What a disappointing simplification.

20
DaveWalk 56 minutes ago 2 replies      
Question to the Machine Learning folk: are five-star ratings "better" than thumbs up/down, or is it just a matter of algorithm design?

I know ratings/prediction has long been studied by the MovieLens.org scientists.

21
Houshalter 54 minutes ago 0 replies      
Normalize star ratings. If something has an average rating of 4 stars, but four stars is in the lowest 10 percent, it should be given 1 star. This solves the classic issue with star ratings where anything under 5 stars is below average.

Personally, I use star ratings badly. I only ever rate titles 5 stars. But that's because I mostly only watch stuff I am fairly sure I am going to like.
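
A minimal sketch of that normalization, assuming each title's raw average is re-expressed as its percentile rank across the catalog (the numbers are made up):

    import numpy as np

    def normalize(avg_ratings):
        # Map raw averages onto 1-5 stars by percentile rank (needs >= 2 titles).
        avg_ratings = np.asarray(avg_ratings, dtype=float)
        ranks = avg_ratings.argsort().argsort()        # 0..n-1, lowest average first
        percentiles = ranks / (len(avg_ratings) - 1)   # 0.0 .. 1.0
        return 1 + 4 * percentiles

    print(normalize([4.0, 4.6, 4.8, 4.5, 4.9]))   # the 4.0 becomes the new 1 star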

22
yitchelle 1 hour ago 1 reply      
I have always wanted to get ratings from a selected group of viewers who have similar tastes to mine. For example, I like sci-fi movies that involve planetary travel, e.g. Star Trek or similar movies. If I can get ratings from folks with similar interests as well, then the rating system would make sense for me. Otherwise, it is still hit and miss for me.
23
ed_balls 27 minutes ago 0 replies      
Netflix recommendations are quite bad.

You've seen this standup? How about watching it again a day later.

24
sgloutnikov 2 hours ago 0 replies      
I think it's a great move. Maybe on a similar note, one of the reasons I really like and find more use out of Foursquare is the way their rating system works compared to the five-star Yelp rating system. For example, in Yelp I found that all "good" restaurants are in the above 4-star rating, and that's all the information I can get out of that rating. With Foursquare, they have turned the upvote/downvote/neutral rating into a number between 1-10 that tells me more. Above average is 7+, something unique about the place 8+, truly exceptional 9+.

It just seems to me it's a simpler way to rate something by a user, and at the same time classify it more adequately.
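
Foursquare's actual formula isn't public, so the following is purely a toy illustration of how up/down/neutral tallies could be folded into a 1-10 score:

    def venue_score(up, down, neutral):
        # Toy mapping only; not Foursquare's real algorithm.
        total = up + down + neutral
        if total == 0:
            return 5.5                            # no signal: middle of the scale
        approval = (up + 0.5 * neutral) / total   # count "neutral" as half a vote
        return 1 + 9 * approval                   # map 0..1 onto the 1-10 scale

    print(round(venue_score(90, 5, 5), 1))   # a mostly-liked venue -> ~9.3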

25
adrianlmm 1 hour ago 0 replies      
Terrible idea. Thumbs up/down ratings only promote political reviews. A movie that criticizes feminism? Thumbs down immediately, regardless of the acting or content. A movie showing the good parts of religion? Thumbs down from visceral atheists. One just needs to look at YouTube to see how this doesn't work.
26
chrisper 29 minutes ago 0 replies      
After almost 5 years I finally cancelled my Netflix subscription. I just realized I was watching movies/shows on Amazon, YouTube, or any of the other paid services more than on Netflix. I used to enjoy it a lot more back then.
27
kochandy 57 minutes ago 0 replies      
Whenever I would browse the Netflix reviews it appeared most people voted either 5 stars or 1 star anyway.
28
nrki 1 hour ago 0 replies      
Should be "five star-rating"..."five-star rating" indicates they are replacing ratings of five stars :)
29
bjarneh 2 hours ago 2 replies      
What I really want to avoid is the mediocre content, i.e. 2.0-3.5 stars; very good and very bad can both be entertaining.

Now that mediocre content will be scattered across the two categories I really care about, I guess IMDb/Rotten Tomatoes will steer clear of this ridiculous binary rating system.

30
kefka 2 hours ago 7 replies      
And not only that, but they're also removing their existing collections of videos and replacing them with "Made by Netflix" shows that mostly amount to shovelware.

I'm about ready to drop them, just like they've dropped MASH, soon the X-Files, and many, many movies.

31
j_s 2 hours ago 1 reply      
The best way to rate a movie for me would be: when will I watch it again?

They kind of already know this though. (How many times have I watched a particular movie on Netflix?)

32
mrmondo 59 minutes ago 0 replies      
Life is not black and white, how you feel about something isn't terrible or brilliant every time. This is idiotic.
33
nailer 2 hours ago 0 replies      
YouTube used to have a star based ratings system too. I wonder if they drew the same conclusions.
34
muninn_ 2 hours ago 2 replies      
I wish there was a "meh" option. Sometimes things are ok, but not good or bad. This will leave me not rating a bunch of things unless they're good or bad.
35
sodapopcan 2 hours ago 0 replies      
Well, if no one else is gonna link it I may as well: https://xkcd.com/1098/

I'm all for this!

36
instaheat 2 hours ago 0 replies      
What does this say about their algorithm? Forget UX/UI.

I feel as though this devalues their recommendations to you.

37
PhasmaFelis 1 hour ago 1 reply      
I don't have Netflix, so I'm not clear on whether the rating shown to users is changing. Or does it not show a rating at all, and just use it to decide what movies to surface for you?

In any case, simplifying the ratings users can bestow makes a lot of sense, unfortunately. We like to imagine people carefully considering the merits and flaws like a professional reviewer, but the fact is that the vast majority of users only ever use two ratings anyway: if they like it, 5 stars; if they didn't, 1 star. There's maybe a third option where if they're ambivalent they just don't rate it.

So star ratings aren't actually very useful for evaluating products. This xkcd https://xkcd.com/1098/ made me realize that an Amazon product which has a 20% chance of exploding when you open the box, and otherwise works normally but unexceptionally, is gonna average out to a four-star rating for a product you really shouldn't buy. I always look at the one-star reviews before buying stuff now, no matter how few of them there are; knowing a thing's failure modes is much more useful than a bunch of praise.

10
Netlify CMS An open-source CMS for Git workflows netlifycms.org
217 points by corny  10 hours ago   50 comments top 16
1
bradgessler 8 hours ago 3 replies      
This will sound crazy, but I recently deployed a content management workflow for a rails app that mounts a WebDAV drive inside of a staging rails application that's running http://sitepress.cc/ (think Middleman in Rails, without all the dependencies)

If the marketing team wants to edit something, they mount the content drive, make changes using their text editor of choice, and are able to preview when they hit Save.

After they make a bunch of changes and add whatever media, an engineer copies the changes off of the drive into a commit and deploys to production.

So far it's been working pretty well. The marketing and CS teams are all able to edit markdown copy and run it more easily through their content editing processes since it's just files. They don't need to learn or care about git. Designers can throw images in the volume without thinking too much about it.

Engineering is happy because they control their git history and commits. When it's time to optimize PNGs, compress assets, etc., the engineer can deal with it and deploy to production with confidence.

We also run RSpec tests against every single page of content to enforce SEO, branding, etc.

Happy to answer questions or write something more detailed about it if others are interested.

2
benaiah 8 hours ago 5 replies      
I work on this project, and it's been a big day for us - Smashing Magazine just released a preview of its new site[0], which uses Netlify CMS as the admin interface. I'm happy to answer any questions about the project.

[0]: https://next.smashingmagazine.com/2017/03/a-little-surprise-...

3
polymath21 14 minutes ago 0 replies      
How does this compare to Contentful? I've been trying to combine their headless CMS with Middleman but it hasn't been that easy to use, although the experience I'm trying to create is non-standard (imagine a normal blog post but with cards inserted throughout). Plus, the cheapest paid plan is $249/month which is really hefty when compared to the competition. Their free tier does give you a good amount of features though and their admin UI is the best.

I was planning to deploy my site onto Netlify once done. Wondering if maybe your CMS is a better alternative to Contentful and I should just combine services.

4
gogopuppygogo 3 hours ago 0 replies      
I've been using Netlify for over a year to host static websites on custom domains and it's been impressive to me. Their control panel and tight git integration makes for a powerful combination. Not to mention that they also introduced a free tier for custom domains that I'm now using for some sites.

This is exciting to see them work on. I'm disappointed to see that they aren't focusing on designers though. Something like RespondCMS for Netlify hosting would be a superior combo.

5
laktek 8 hours ago 1 reply      
Pretty interesting, and this is the first time I've come across the term JAMstack (a concept I always had trouble explaining to other developers).

Rendering content blocks on the server and then further enhancing them on the client side using a JSON API may be how most sites will be built in the future (especially since, with ServiceWorkers, we will be able to create more fine-grained offline experiences too).

I'm also currently working on a project along similar lines; check out the initial blog post for more details - http://www.laktek.com/2016/11/29/introducing-pragma/

Edit: also loved the idea of a kanban board for edit-review-publish process. Might steal that idea ;)

6
trqx 7 hours ago 1 reply      
If I follow the instructions in the "Try it out"[1] link and click on "Connect to GitHub", I end up having to give r/w access to ALL of my GitHub repositories.

Also, reading the Step by Step guide[2] I'd end up giving r/w access to my public keys as well.

Now, I just woke up and maybe I missed something, but the documentation should make it clearer that you either have to create a dedicated GitHub account, or explain how to manage permissions in a more granular fashion.

[1] https://www.netlifycms.org/docs/test-drive/

[2] https://www.netlify.com/blog/2016/10/27/a-step-by-step-guide...

7
thenomad 5 hours ago 1 reply      
Looks very cool. Does this only work with Github?

Unfortunately that renders it pretty much useless for me (and probably other non-Github users) if so.

8
steffoz 9 hours ago 1 reply      
Thanks for citing DatoCMS, Netlify, appreciate it :)

Really love what these guys are trying to build: critical mass over static websites. That's what we need.

9
rogerjin12 9 hours ago 0 replies      
I work at buttercms.com and we are huge fans of Netlify and how easy they've made hosting static websites and SPAs. This CMS looks awesome and it is clever how it hooks into Github.
10
nodesocket 7 hours ago 1 reply      
I'm looking to create a back office for a consultancy company. Typical data like clients, users, billing, etc. It seems like Netlify CMS would allow me to create custom collections and data types, correct? So instead of pages and posts, I could have clients, users, invoices, etc.
11
boondaburrah 8 hours ago 0 replies      
I was literally gearing up to build something almost exactly like this this week as my own hackathon project. I needed a blog and I don't wanna do static site generation on the command line. I may still do it, but dang, way to take the wind out of my sails. :P
12
Myztiq 9 hours ago 1 reply      
Seems like a neat idea, I have not yet found a good example of the output that this generates on github.

I've used prismic.io and other such systems in the past, all wonderful. I'm glad to see this is evolving in other ways.

13
radiospiel 8 hours ago 1 reply      
Sorry, what exactly does this do? Assuming one is looking for a CMS, the description on the front page does not tell me anything. How does it compare to other CMSes? After "Find out more" I now know that it does something with GitHub, but what exactly? (Also, I now know that it has something to do with a JAMstack, whatever that might be.)

I am looking around for a smallish CMS, but I got no idea from the introductory pages whether or not Netlify could help me.

14
NKCSS 7 hours ago 1 reply      
Minor nitpick on the screencast: when clicking save, it took 4-6 seconds for it to complete and then load another page? That's not the best way to show off your CMS, and it instantly makes me fear what will happen if I have a lot of content in there.
15
therealmarv 4 hours ago 1 reply      
So this does not work with GitLab, right?
16
ranyefet 5 hours ago 0 replies      
Great idea, definitely going to try it out
11
How to Clear a Path Through 60 Feet of Snow, Japanese Style atlasobscura.com
228 points by dpflan  12 hours ago   58 comments top 21
2
chrissnell 10 hours ago 7 replies      
I would love to see that first bulldozer at work. I'm a little amazed because snow becomes incredibly heavy and hard to move/push when it's packed. As the blade moves forward, that dozer will quickly have to use incredible force to move forward.

I experienced this first-hand last summer when I was driving my old Land Rover across the country. I was solo, way up in the very remote Henry Mountains [1] of Utah, about to reach the crest and continue east. I came around the bend and there was a massive drift of snow covering the trail. [2] Foolishly, I thought I might be able to drive through, or at least ram-reverse-ram-reverse-ram my way through. I should have known better. I drove into the drift and my tires sank and the face of my front axle compacted the snow ahead and just stopped. The entire front of the truck was encased. It had been a warm day and was rapidly chilling in the evening light and the wet snow froze around my axle and my truck wouldn't move an inch. I spent the next hour digging out the front of the truck in fading light and by some magic, was able to free myself and back down the precarious cliffside road to a spot where I could turn around. Lesson learned: snow is really heavy.

[1] https://goo.gl/maps/XjyNapW3XRL2

[2] https://www.flickr.com/photos/defender90/27742972523/in/albu...

3
jonah 7 hours ago 0 replies      
If you want to see some impressive snow[1] in the US, visit Crater Lake National Park in Southern Oregon during the winter. Last time I was there, the snow cuts looked just like these (though not quite as tall.)

"The average snowfall at Crater Lake is 533 inches every year. That's about 44 feet. The greatest cumulative snowfall for one season was 879 inches (73 feet) the winter of 1932-33. The greatest depth on the ground at one time was 258 inches (21 feet) the winter of 1983. Most of the snow usually melts by the beginning of August, although after particularly heavy seasons, there are drifts that fail to melt before the snows return again in the early Fall."[2]

[1] https://www.google.com/search?q=crater+lake+snow&tbm=isch

[2] https://www.nps.gov/crla/faqs.htm

4
lb1lf 8 hours ago 1 reply      
We do have a few quite interesting mountainous roads in Norway, too - as a matter of fact, a couple of minutes ago I heard on the radio that only two of the -hm, six? seven? I only ever use two of them- mountain crossings connecting eastern Norway to the rest of us were open, due to lots of snow and poor weather in recent days. Those two are now convoy only.

Have a look at this, for instance: (0) Not as elegant as the Japanese operation, but the photo probably predates GPS, to the Norwegian snow plow crews' credit. I'd feel very small traversing down that road. Doubly so if I suddenly met a heavy truck, a bus or anything larger than a bicycle, really...

I just stumbled upon a couple of nice photos of the current weather conditions at Haukeli, one of the passes currently closed. Just click the photo for the next one. (1)

(0) http://www.gibud.no/auction/APUserImages/2015B105661F1LTLXUH...

(1) https://www.nrk.no/hordaland/sa-vakkert-kan-det-vaere-a-vent...

5
Animats 10 hours ago 2 replies      
Clearing doesn't always work. Here are photos from Hokkaido of cars with people in them buried in snow drifts.[1] Six deaths. One train derailment.

Note the poles over the road with the downward pointing arrows. That shows where the road is, when it's buried.

[1] http://www.dailymail.co.uk/news/article-2287894/Japan-storm-...

6
bahmboo 1 hour ago 0 replies      
Mt Baker, WA: the 1998/1999 season brought 1140 inches, and that was just at 4200 feet at the resort. Road snow canyons are cool.
7
sushobhan 9 hours ago 0 replies      
"This is a job and someone has to do it" - this comment from the article just made my day. Hats off; such an awesome attitude and dedication towards work.
8
chiph 2 hours ago 1 reply      
I'm looking at the vertical sides of the cut, and I'm really surprised it doesn't collapse.

https://en.wikipedia.org/wiki/Angle_of_repose

9
akg_67 11 hours ago 0 replies      
Very interesting. I wasn't familiar with Toyama. But I spent last December and January in Sapporo. People mentioned that last December they had the most snow in 15 years. I was comparing my experience in Sapporo with living in Ottawa. Both places seem to have similar winters. I can see myself living in Sapporo in winter, but not in Ottawa. The main difference was the snow ploughing of roads and sidewalks; it was much better in Sapporo than in Ottawa. Also, Sapporo has a very good subway and underground walkways, so your exposure to winter conditions is somewhat reduced.
10
jacquesm 10 hours ago 0 replies      
My first introduction to Canada was something like this. The winter I got there for the first time had some of the worst weather in decades; the trip from Dorval airport to Montreal was through a canyon just like the one in the picture.

1997/1998 was quite the introduction to the concept of 'winter' for me. In NL a harsh winter might see one night of -15 and the rest of the time a bit below or around freezing. That winter saw one of the worst ice storms in Canadian history; lots of electrical infrastructure went down under the weight of the ice, with power grid pylons breaking off like matchsticks.

Another place where these snow canyons are made in winter is Alaska.

11
Turing_Machine 10 hours ago 1 reply      
"Syracuse, New York, often dubbed the snowiest city in the United States, receives, on average, 117 inches of snow a year. "

Not even close. Valdez, Alaska averages 305.8 inches per year.

12
pwarner 10 hours ago 1 reply      
Not nearly as deep, but the Sierra should have lots of snow this summer. Reference https://www.nps.gov/lavo/planyourvisit/fourth-of-july-weeken...
13
yomly 5 hours ago 1 reply      
Tateyama or Mt. Tate, but never Mt. Tateyama ("yama" already means "mountain").
14
hkmurakami 10 hours ago 0 replies      
I was in a car to Niigata (near Toyama) a few years ago in February, and the walls were about 10 feet high. First time I'd ever seen them, and I was definitely in awe.

Btw, Toyama has great water and consequently is home to some of the finest sake breweries in the country.

15
ben_utzer 4 hours ago 0 replies      
YouTube has some videos of how it's plowed in the Alps: https://www.youtube.com/watch?v=IzzwEQF84WE
16
avenoir 10 hours ago 0 replies      
Logan Pass in Glacier National Park gets similar accumulations of snow. This is off topic, but I would highly recommend this drive in the summer. One of the most stunning roads in North America.
17
vasira 4 hours ago 0 replies      
This place is full of snow, but hats off to the people who clear the path!
18
Hydraulix989 8 hours ago 1 reply      
It's just like that one Mario Kart 64 level with the snow, except IRL. So that's where the Japanese level designers got their inspiration from!
19
TorKlingberg 6 hours ago 3 replies      
Does this website (atlasobscura) ever do anything other than repost viral stuff they copied from around the web?
20
bayesian_horse 8 hours ago 0 replies      
I expected someone wielding a Katana to hack it away...
21
frik 5 hours ago 1 reply      
A copycat of a European invention. These snow blowers were invented many decades ago. The original vehicles from the 1930s are still in use in winter in the Alps - on mountain roads and trains.
12
Google wont be able to resist listening in on your conversations signalvnoise.com
39 points by braythwayt  1 hour ago   12 comments top 6
1
educar 1 minute ago 0 replies      
I have trumpeted this many times, but self-hosted solutions like Cloudron, Sandstorm and arkOS are the way to go. We want IoT devices to send data to our personal servers and not to big brother companies for analysis and mining.

That said, in my experience, people willingly give their data to Google and Facebook. I assumed people were not cognisant, but this is not the case. Most people are very conscious of their decision and argue that their data is safer with Google and that it's a good trade-off. This seems like a PR issue and hard to fix even if self-hosted solutions reach maturity.

Do others share my thoughts? Would love to hear from privacy enthusiasts as to how we can promote better architectures for the web.

2
frik 11 minutes ago 1 reply      
3
waqf 42 minutes ago 4 replies      
This is really missing the point, because if I objected to Google mining my private conversations to advertise to me I wouldn't use Gmail.

The problem with ads on Google Home this week is serious, but it isn't related to "Google spying on me" at all. It's to do with the fact that reading out ads in my home is super creepy, despite (in fact, because of) the fact that the ads aren't targeted at me.

4
blakesterz 27 minutes ago 0 replies      
That Verge article has 2 different statements; the first sounds like this was for sure an ad, though they try like hell to make it sound like it's not an ad...

"This isn't an ad; the beauty in the Assistant is that it invites our partners to be our guest and share their tales."

But then the second sounds like no, it wasn't so much an ad as something an algorithm dug up.

"Whats circulating online was a part of our My Day feature, where after providing helpful information about your day, we sometimes call out timely content."

Giving them the benefit of the doubt here, those 2 statements seem to describe things that can be different. One "came from a partner", the other was just "timely content".

5
tyingq 29 minutes ago 0 replies      
I'm sure they will find a way to introduce the ads, as well as a premium subscription option that removes them.
6
rdiddly 32 minutes ago 0 replies      
"Resist" might not be the operative term here. Advertising is the whole purpose for developing these devices and placing them in people's homes.

Edit: And though it's obvious I'll say it: at Google advertising means data gathering.

13
Learning to Communicate with Deep Multi-Agent Reinforcement Learning github.com
31 points by seycombi  5 hours ago   1 comment top
14
Lockheed Martin completes new battle laser for U.S. military cnbc.com
23 points by smaili  46 minutes ago   5 comments top 2
1
splitrocket 22 minutes ago 2 replies      
I wonder if it complies with the treaty banning blinding laser weapons: https://en.wikipedia.org/wiki/Protocol_on_Blinding_Laser_Wea...
2
sakopov 9 minutes ago 1 reply      
> A Patriot missile, usually priced at about $3 million was recently used to shoot down a $200 quadcopter drone, according to a US general.

I thought they were at most $50k apiece. This is unbelievable.

15
Broadband left out of infrastructure goals. How FCC wants to fix it washingtonpost.com
11 points by SmkyMt  2 hours ago   3 comments top
1
xf00ba7 1 hour ago 2 replies      
The question is....does the FCC really want to fix it?
16
A primer on designing better cameras for games [video] gamasutra.com
47 points by Red_Tarsius  5 hours ago   5 comments top 2
1
doomlaser 4 hours ago 0 replies      
The camera engineer for the PS3 game Journey gave a great talk on 3D third person camera systems a couple GDCs back that covers a lot of useful ground: https://www.youtube.com/watch?v=C7307qRmlMI&t=2432s
2
laurent123456 4 hours ago 3 replies      
See also Virtual Camera System [0]. One of the camera systems that first impressed me was the one in Super Mario 64. It's so nicely designed that you'd often forget it's doing anything - for example, it would follow Mario while rotating around him so as to point in the direction of the current path. I was a big fan of the camera angles in Resident Evil 2 as well; they created interesting sequences of shots that would build up tension without actually showing any direct threat.

[0] https://en.wikipedia.org/wiki/Virtual_camera_system

17
HTTPS Interception Weakens TLS Security us-cert.gov
176 points by mlindner  14 hours ago   76 comments top 12
1
omribahumi 9 minutes ago 0 replies      
This is what Squid has been trying to solve with their server-first bumping [0] and peek-and-splice [1].

[0] http://wiki.squid-cache.org/Features/BumpSslServerFirst

[1] http://wiki.squid-cache.org/Features/SslPeekAndSplice

edit: reference peek-and-splice too
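
For reference, a minimal squid.conf sketch of the peek-and-splice approach described on those wiki pages (the port, cert path and omitted tuning options are assumptions, not a drop-in config; the linked pages have the authoritative examples):

  # Squid 3.5+: look at the TLS handshake without terminating it
  http_port 3129 intercept ssl-bump cert=/etc/squid/ca.pem
  acl step1 at_step SslBump1
  ssl_bump peek step1   # peek: read the ClientHello/SNI without decrypting
  ssl_bump splice all   # splice: tunnel the original TLS bytes untouched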

2
mlindner 13 hours ago 5 replies      
The US government has basically declared "HTTPS/TLS Interception Considered Harmful". This is going to be interesting, as all the major security load balancers/appliances out there offer this as a standard service at this point.
3
Animats 9 hours ago 7 replies      
We need MITM detection in the browser.

Yes, it's possible. The crypto bits the host is sending are different from the crypto bits the client is receiving. There are several ways to compare those, despite what the MITM box is doing. Out of band channels, timing, and order of data can be used.

I sometimes refer to HTTPS Everywhere as "Security Theater Everywhere". Before the mania for HTTPS, many sites used HTTPS only for crucial transactions such as logins and credit cards. Those were infrequent enough that they didn't have to go through a CDN. Now, with HTTPS Everywhere, there's no distinction between the stuff that has to be hidden from observers and the stuff which only needs something like Subresource Integrity to make sure it hasn't been messed with. So now the secure channel over which credit card numbers and logins are passed is exposed at the CDN.
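
A minimal sketch of that comparison idea in Python (the reference fingerprint is hypothetical; getting it to the client securely, out of band, is the hard part a browser would have to solve):

  import hashlib
  import ssl

  def leaf_fingerprint(host, port=443):
      # Hash the certificate this client is actually being shown.
      pem = ssl.get_server_certificate((host, port))
      der = ssl.PEM_cert_to_DER_cert(pem)
      return hashlib.sha256(der).hexdigest()

  KNOWN_GOOD = "..."  # hypothetical fingerprint obtained out of band
  if leaf_fingerprint("example.com") != KNOWN_GOOD:
      print("certificate differs from reference: possible MITM box")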

4
Keverw 11 hours ago 2 replies      
A while back I remember seeing on HN there was an issue with a certain vendor and Chromebooks because Chrome used a newer TLS version (and the MITM vendor was notified in advance too, and didn't update their product).

I wonder how schools and banks plan to react to this... Apparently financial firms have to record everything their employees do for some regulations.

To me, schools doing this sort of thing is wrong. I wouldn't be surprised if the principal would grab people's passwords and log in to their accounts even. I know some schools even went as far as to demand students hand over their social media passwords when they report bullying... And if the school blocks social networks anyway, I don't see how what happens outside of school is a school issue...

If this sort of thing really needs to be done, at least people should be warned and aware they are being monitored. If it's for a bank and it's only company equipment being monitored, it seems a bit more okay to do if everyone is well aware: a "you are only to use work computers for official business" sort of policy.

5
tofflos 9 hours ago 2 replies      
So how do I test if my workplace is doing a good job of this? The article mentions badssl.com. Do I just click all the red links in the certificate section and verify that my browser is refusing to display the pages?
6
tinus_hn 7 hours ago 0 replies      
And the answer is going to be custom unspecified encryption inside the encrypted channel.

HTTPS mitm proxying is a dumb idea that can't work right but is easy to sell to executives. It will be defeated until we're at the state of a few years ago, where the products that really matter, like banking, have tough security and the rest has less.

7
philbert 7 hours ago 2 replies      
I still find it an unfortunately shallow analysis.

I'm currently fighting a battle in a company in the middle of rolling out Blue Coat ProxySG. I only became aware of it because it began causing interruptions to our work since none of the development tools get the necessary root cert to validate the certs that the proxy is rewriting. The root cert is only installed into the Windows credentials to make browsers work and it's left up to every client to fix whatever other problems they have.

One of the common arguments I've encountered from people is along the lines of "if the company trusts the security people, then I trust the security people". However, this ignores the fact that the company has legally enforceable agreements in the form of employment contracts, NDAs, and what-have-you that protect the company against a breach of that trust.

But what kind of protection do you have as an individual if, for whatever reason, one of the security people decides to single you out while they have unsupervised access to your social media logins, personal email, credit card numbers and financial details (and let's face the fact that everyone does all of these things to some degree on a work computer)?

The answer is that you more than likely have no protection at all (likely not even in workplace law). Even if you suspected someone had inappropriately accessed your personal details, the company almost certainly has an IT policy saying that you shouldn't be using your computer for personal use. You're screwed.

However, these inspection boxes drastically change the situation from what it was before, because most people outside the IT department won't even know about this drastic change in capability. And likely they try hard not to think about it until (if) they discover something has happened. I'm sure everyone here is aware of the abuses of NSA employees spying on their wives or exes [0]. Such is human nature.

Sadly I suspect that all these arguments will do little to change the situation. It seems more likely that the companies who deploy these systems are only going to listen to arguments specifically about how it is increasing work, causing delays of deliveries and affecting project costs.

[0] http://www.reuters.com/article/us-usa-surveillance-watchdog-...

8
jusob 13 hours ago 3 replies      
It does not have to be. Done correctly, SSL interception can pass through all the errors to the client:

* certificate issues (expiration, domain mismatch, etc.)

* OCSP/CRL verification

* validation of HPKP header

I understand that few vendors may be doing it (I know one which does at least the first 2). Probably the worst offense is choosing the weakest TLS version + cipher to save resources, like using TLS 1.0 because it takes fewer resources to decode/encode than TLS 1.2 + elliptic curves.
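
To illustrate the first two bullets, a sketch in Python of the upstream leg such a proxy needs (using badssl.com, mentioned elsewhere in this thread, as a test target; this is the idea, not any vendor's implementation):

  import socket
  import ssl

  def connect_upstream(host, port=443):
      # Full validation on the proxy's upstream leg: the default context
      # verifies the chain, expiry and hostname, and raises on failure.
      ctx = ssl.create_default_context()
      raw = socket.create_connection((host, port), timeout=10)
      return ctx.wrap_socket(raw, server_hostname=host)

  try:
      connect_upstream("expired.badssl.com")
  except ssl.SSLError as err:
      # Pass the failure through to the client instead of swallowing it.
      print("surface the error to the client:", err)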

9
seasonalgrit 5 hours ago 1 reply      
any recommendations on tutorials/guides for better understanding the world of TLS, certificates, and so on?

i don't feel like i have a healthy mastery of the ideas discussed in articles like this one.

10
evilDagmar 13 hours ago 2 replies      
TL;DR: If your organization is going to do HTTPS interception, don't screw it up.
11
jaimex2 11 hours ago 1 reply      
Reading the title

... no shit.

12
synicalx 11 hours ago 4 replies      
It might weaken TLS, but it also stops the 3000 head of cattle I manage from being able to watch porn (6 incidents) and torrent movies (47 copyright notices). If their internet banking, which they're not supposed to be doing at work, gets compromised then I really couldn't care less.
19
Dwarf Fortress creator Tarn Adams on simulating most complex magic system ever pcgamer.com
120 points by danso  14 hours ago   26 comments top 4
1
robocaptain 3 hours ago 0 replies      
This game and its creator(s) never cease to amaze me. Looking forward to all of the new "fun" this will create.
2
meddlepal 2 hours ago 4 replies      
I really wish DF was OSS. My great fear for this game is the developer dies and we lose an amazing piece of tech with him.
3
db48x 7 hours ago 1 reply      
Having world-gen create a unique magic system for each game sounds like it'll be a lot of fun.
4
dagenleg 2 hours ago 0 replies      
Wow! The creator really knows what he is talking about.
20
Nintendo to Double Production of Switch Console wsj.com
238 points by Tiktaalik  9 hours ago   251 comments top 20
1
baby 46 minutes ago 0 replies      
I love this console, haven't been able to sit down and play for long hours in decades and here I am.

I love that it's handheld, I travel a lot and so far I've played that in the plane and in various airbnbs, splitting the controller in two and playing snipperclips or bomberman with my girlfriend.

It's probably the best console I've bought since the N64. I see a lot of skepticism here, but I assume it is mostly from non-buyers. I predict that this is going to be the biggest Christmas for Nintendo this December.

2
learc83 8 hours ago 13 replies      
Breath of the Wild is the best game I've ever played. I'd recommend getting a switch just for Zelda alone.
3
knodi123 1 hour ago 1 reply      
Good, because I absolutely refuse to pay scalper prices. Now if someone could figure out the left Joy-Con radio issues, I'd buy one ASAP.
4
DaveWalk 1 hour ago 2 replies      
Was it true that Nintendo held back some of its production of earlier consoles to spike demand? I remember it specifically with the Wii[0], but I seem to recall that their production was always playing catch up. Do you think it's an industry tactic?

[0]http://www.cio.com/article/2434123/supply-chain-management/n...

5
lobotryas 5 hours ago 2 replies      
And here I am still waiting to buy a NES Classic. Maybe next year...

Looking forward to getting the Switch because I always liked Nintendo's handhelds and this qualifies. Just hope they work out the design/quality issues by the time stock catches up to demand.

6
pmorici 2 hours ago 3 replies      
I wish they would double production of the Nintendo Classic. I can't find it anywhere except places that are charging 3x the list price.
7
naringas 7 hours ago 2 replies      
I'm betting on the fact that the Nintendo Switch will be able to browse the web and play music and video (YouTube, Netflix).

I understand that they didn't officially support this at launch, probably because of time constraints and priorities, but it just doesn't make any sense for the Switch not to support this in the longer term. Its form factor is perfect for it.

8
mpg33 2 hours ago 1 reply      
I don't think launch sales are a good indicator to go by...

There is enough of a hardcore Nintendo fanbase to buy up initial launch supply.

Let me know of the sales in July.

9
dcw303 8 hours ago 5 replies      
Good. I'm getting sick of walking into Bic Camera and asking the same question when I already know what the (disappointing) answer is going to be.

I get that they're a typical conservative Japanese company, they don't want to screw up inventory, blah blah blah, but it's getting annoying.

And slightly off topic, does anyone know what is up with the NES classic / Famicom mini? As far as I can tell that thing never restocked after launch.

You can see how it looks like Nintendo is majorly shooting themselves in the foot with these slow launches.

10
mrfusion 3 hours ago 0 replies      
Does this production come at the expense of the nes classic though? For some reason I'm really mad I can't buy that stupid thing.
11
oculusthrift 7 hours ago 2 replies      
What about the NES Classic, which came out a long time ago and which they're clearly trying NOT to sell??
12
shmerl 50 minutes ago 0 replies      
That will help Vulkan adoption.
13
phyushin 1 hour ago 0 replies      
Login to see the rest of the story :-/
14
mrmondo 6 hours ago 0 replies      
I'm a sucker for getting lots of gadgets and then shelving them. The Switch is the first game console/handheld since the Game Boy Advance that I actually genuinely appreciate, both for its quality and for its marvellous simplicity; it's truly wonderful. It has its flaws in that there aren't many release titles, but Zelda is amazing and it's just a joy to use. Mad rep out to Nintendo for pulling one out of the hat.
15
Karunamon 1 hour ago 2 replies      
Late again. Nintendo is comically bad at meeting demand.

Every time they have a highly anticipated product release, there are many multiples fewer products on the shelves than there is demand, for months at a time, and meanwhile, eBay prices spike to 200%+ retail. Meanwhile, consumers are disappointed/annoyed, scalpers profit, and Nintendo leaves money on the table.

I wonder how many Switches haven't been sold since people got tired of waiting and just picked up Breath of the Wild for the Wii U, which is now going for fire sale prices?

17
qz_ 8 hours ago 1 reply      
Could someone with a subscription copy the text behind the paywall?
18
unlikelymordant 9 hours ago 0 replies      
Its paywalled
19
DaveSapien 6 hours ago 0 replies      
WSJ? It's a wonder they're not linking Nintendo to the Nazi party...
20
Stimpek 9 hours ago 5 replies      
21
Why Virtual Classes Can Be Better Than Real Ones (2015) nautil.us
101 points by DiabloD3  14 hours ago   23 comments top 12
1
RobinL 7 hours ago 2 replies      
I've been working as a data scientist for a few years now, primarily self taught using the wealth of high quality videos and other materials online.

I recently worked as a teaching assistant on an expensive and well-regarded data science bootcamp course. I did this primarily because I was curious what classroom-based training in data science was like.

I found it much less useful than online materials. It seemed like the primary benefit was just that once you've stumped up several thousand pounds, you're motivated to actually show up and concentrate. And you also get to meet some people with a common interest.

But the quality of teaching fell well below the standard online (not surprisingly, since online you have access to carefully recorded lectures by superstars like Hadley Wickham, Peter Norvig and Andrew Ng).

And on top of that, it's not a great use of time. A half hour to commute to get to the lecture. Then the lectures being in 'real time'. Online, I find myself running video lectures at 1.5 or 2x speed for large portions of the material - the 'filler' - and then having to watch, re-watch the hard concepts several times at normal speed and pause just to think.

I found that at the end of the classroom course, I had developed an awareness of the existence of concepts and techniques, but not really an understanding of them.

2
interestingemu 2 hours ago 0 replies      
Jesus, it took me way too long to realize this was about online classes, not virtual classes in languages. :-)
3
visarga 11 hours ago 2 replies      
> This is particularly true in the fraught area of STEM (Science, Technology, Engineering, Math), where difficult explanations often cry out for a student to replay a portion of a lecture, or simply to take a pause while comprehension works its way to consciousness

Or to do a couple of exercises until students really grasp the subject matter. Often it feels like we can grasp something in theory, only to be stumped by practical applications. One of the greatest benefits a course could have is example problems, with fully explained solutions. If the problems are fun to work through, then it makes for a good course.

What I'd like to see is a promise on the likes of:

> Do these 100 problems and you'll grok X (where X is probability, calculus, functional programming, deep learning, etc.)!

And the problems be well chosen, well explained and fun to work out.

4
treehau5 53 minutes ago 1 reply      
I think a lot of this is stemming from the deeper issue that kids are just tired of being charged absurd amounts of money for a cookie-cutter syllabus based on a textbook, and then tested on it, while the college shoves as many kids into classrooms and labs as possible and praises its "% of grads that find jobs immediately after graduating!"

As a reaction to that, yes, I agree. However, we would be fooling ourselves if we think that anything compares to being next to a real teacher. Someone who has intimate knowledge of the subject. Someone who truly cares about teaching and guiding. I can think of all the MIT OpenCourseWare lectures I watched where my mouth dropped to the floor and I said to myself, "Gosh, I really wish I could have had a teacher like that at my university." And I can only imagine being able to actually go up to their office and get personal 1-on-1 time with them, or work alongside some of them in research... crazy, but getting that experience at some state college in middle America somewhere is just a rarity, a needle in a haystack.

5
hergin 23 minutes ago 1 reply      
My main perspective on the issue: if I want to learn a completely new topic, I like to do it with a teacher standing in front of me. Otherwise, I can take an online course just to recall some topics and update my outdated information.
6
gcdes 23 minutes ago 0 replies      
DataCamp is what got me my current job - I think the extra flexibility is what makes virtual classes better than real ones
7
itsmemattchung 2 hours ago 0 replies      
Virtual classes are better in some ways and inferior (to in person) in others.

I'm currently applying for a master's in computer science and debating between attending a program in person or online. One advantage of attending in person is immersing myself in a classroom full of students, aligned with a similar goal in mind. An online course, on the other hand, offers me more flexibility (I'm working full time at AWS) and saves me time on commuting.

Although I'm constantly studying on my own and leveraging online courses, I'm leaning towards an in-person program for a couple of reasons. First, I recently moved to Seattle and the University of Washington (UW) offers a program tailored for working professionals; this is a great way to not only meet other people on a similar path, but, most importantly, receive in-person feedback from the professors (as well as other students). Feedback, as well as networking, is really difficult to replicate with an online format.

On a separate note, I have yet to find an online master's in CS that offers a thesis; all the programs I researched only offer course-based work.

8
colmvp 9 hours ago 0 replies      
So I've been taking online courses on the side for the last little while, including the author's often-recommended Learning How to Learn. I guess I was spurred on because I felt like I wanted to use my spare time doing something outside of playing games during the winter.

I have to say that watching lectures on Khan Academy, Udacity, or YouTube rekindled my love of learning. Now, I look forward to a block of spare time when I can watch a video and test out my knowledge by practicing on various quizzes from university sources, or by picking up a book with harder examples. And of course, one major advantage is that if I get stuck on a term or concept, I can rewind the lecture, or search on Google for more explanation, or find yet another YouTube video with another take on it. And if I get stuck, I recognize to take one step back and take a refresher on more fundamental concepts.

I think that's why I prefer it over being back in school where the quality and personal connection to the teacher varied so heavily, and the class/course just kept going regardless of your comprehension. We all know those one or two teachers growing up who were so exceptionally bad at teaching a particular subject that it's quite probable some students stopped going in a certain path because of that struggle. Just look up popular math or biology videos, and you'll find students who leave comments like, "Wow you just explained in five minutes what my teacher/prof tried teaching me for x months." I know so many smart people who don't think they could excel in variety of subjects because of bad experiences in school, treating it like a mark of permanence. And it didn't help that some teachers honestly believed if you didn't get it then, well tough luck, you're hopeless.

I still much prefer in-class instructions, largely for the communal experience of learning with others and because the curriculum tends to be more rigorous than online courses. But I find online videos, and write-ups far more effective at teaching me complex things over the average teacher/professor. And one can sometimes make up the difference in difficulty by taking up personal projects or finding difficult quizzes/tests online.

9
mrits 11 hours ago 0 replies      
I worked on a few projects where I wished the other coders only wrote virtual classes.
10
cadillackness 10 hours ago 0 replies      
The only problem I've had with MOOCs is that they assume the message boards are usable (they're not) and that study groups exist (they're basically non-existent). The boards are insanely slow and they're no substitute for office hours or a study group. Study groups are hard to put together because most people don't have 2-5 people in real life interested in taking the course who have time to come together on a Friday night. Though I'm a little embarrassed to admit it, those two factors have caused me to "fudge" the homework a little bit because there's just no way I can tackle something like that on my own and finish it on time. I still learned the material, but there's always the guilt that I "cheated" because I just don't have the time to finish it without any external help.
11
santiagobasulto 4 hours ago 0 replies      
I don't know about MOOCs, but I teach programming online (we have a remote bootcamp) and I prefer the online experience 1000 times more than the physical experience. The amazing groups of people that we create, with folks from all around the world, are invaluable. They feel comfortable, secure, and motivated.
12
mrdmnd 12 hours ago 3 replies      
Ok, I'll bite - I thought this was an article about C++.
22
Lisping on the GPU [video] youtube.com
96 points by rutenspitz  14 hours ago   17 comments top 8
1
ndesaulniers 19 minutes ago 1 reply      
This was awesome. This makes me want to learn Lisp! What's folks' favorite intro/reference?
2
thom 6 hours ago 1 reply      
This is cool, and is actually quite a nice intro to the model underlying GL even outside the lispy stuff.

If, unlike the presenter, you don't mind higher level engines and stuff, you might like Arcadia, which brings Clojure to Unity:

http://arcadia-unity.github.io/

3
amelius 3 hours ago 1 reply      
I quickly glanced through the video without sound, but I wonder: is this video about running graphics primitives from LISP, or is it about speeding up LISP using a GPU? (I hope the latter).
4
mathnode 5 hours ago 0 replies      
Great talk, I watched this yesterday.

Check out his lisp repos here: https://github.com/cbaggers?tab=repositories

More CEPL and related videos on his Channel: https://www.youtube.com/user/CBaggers/videos

5
vanderZwan 7 hours ago 1 reply      
Haven't watched the video yet because I'm at work, so pardon my likely misunderstanding of what the video is about, but wasn't there also someone working on something like this in Racket? Are you aware of each other's work?
6
freekh 6 hours ago 0 replies      
Can't wait to the next hackathon I am going. I am so going to play around with this!
7
afghanPower 6 hours ago 1 reply      
When was this meetup? Shame I didn't know about it :(
8
ino 13 hours ago 1 reply      
Constant audio cuts make this unpleasant to watch.
24
Guetzli: A New Open-Source JPEG Encoder googleblog.com
509 points by ashishgandhi  22 hours ago   111 comments top 28
1
londons_explore 18 hours ago 4 replies      
This seems to be optimizing for a "perceptual loss function" over in https://github.com/google/butteraugli/blob/master/butteraugl...

Looking at the code to that, it looks like 1500 lines of this:

  double MaskDcB(double delta) {
    PROFILER_FUNC;
    static const double extmul = 0.349376011816;
    static const double extoff = -0.894711072781;
    static const double offset = 0.901647926679;
    static const double scaler = 0.380086095024;
    static const double mul = 18.0373825149;
    static const std::array<double, 512> lut =
        MakeMask(extmul, extoff, mul, offset, scaler);
    return InterpolateClampNegative(lut.data(), lut.size(), delta);
  }
The code has hundreds of high-precision constants. Some even seem to be set to nonsensical values (like kGamma to 0.38). Where did all of them come from? The real science here seems to be the method by which those constants were chosen, and I see no details on how it was done.
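
For readers puzzled by the idiom rather than the constants, here is a guess in Python at what a helper like InterpolateClampNegative does, reconstructed purely from its name and call site (not from butteraugli's actual source):

  def interpolate_clamp_negative(lut, delta):
      # Clamp negative inputs to the first table entry, then linearly
      # interpolate between neighbours (assumes delta is in table units).
      if delta <= 0.0:
          return lut[0]
      i = min(int(delta), len(lut) - 2)
      frac = delta - i
      return lut[i] * (1.0 - frac) + lut[i + 1] * frac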

2
pjsg 18 hours ago 5 replies      
As the author of the original libjpeg (back in 1991), I think this has been a long time coming! More power to Google.
3
vladdanilov 21 hours ago 4 replies      
I'm working on a similar thing (http://getoptimage.com). While Guetzli is still visually better and a bit smaller in file size, it's terribly slow and requires a lot of memory. But it's a great experiment. So much knowledge has been put into it.

I believe using a full blown FFT and complex IQA metrics is too much. I have great results with custom quantization matrices, Mozjpeg trellis quantization, and a modification of PSNR-HVS-M, and there's still a lot of room for improvement.

4
mattpavelle 21 hours ago 2 replies      
I'll run some of my own experiments on this today, but I'm initially concerned about color muting.

Specifically looking at the cat's eye example, in the bottom of the pupil area there's a bit of green (reflection?) in the lower pupil. In the original it is #293623 (green) - in the libjpeg it is #2E3230 (still green, slightly muted). But in the Guetzli-encoded image it is #362C35 - still slightly green but quite close to grey.

In my experience people love to see colors "pop" in photos (and photography is where JPEG excels) - hopefully this is just an outlier and the majority of compressions with this tool don't lose color like this.

5
i80and 21 hours ago 0 replies      
Some comparison with the mozjpeg encoder here: https://github.com/google/guetzli/issues/10

TLDR:

> We didn't do a full human rater study between guetzli and mozjpeg, but a few samples indicated that mozjpeg is closer to libjpeg than guetzli in human viewing.

6
onion2k 20 hours ago 5 replies      
Sort of related, but what's the story with fractal image compression? When I was at university (~20 years ago) there was a lot of research going into it, with great promises heralded for web-based image transfer. There was a Netscape plugin that handled them. They seemed to just disappear in the early 2000s.
7
shmerl 36 minutes ago 0 replies      
How does it compare to mozjpeg?
8
dmitrygr 21 hours ago 2 replies      
Cool, but neither the article nor the paper (https://arxiv.org/pdf/1703.04416.pdf) mention just how much slower it is.
9
ZeroGravitas 1 hour ago 1 reply      
How relevant to web pages is this?

The blog makes it sound like that's the target but the paper has this line:

"Our results are only valid for high-bitrate compression, which is useful forlong-term photo storage."

Do the authors think the size/quality benefits still show up when targeting lower bitrates/qualities that are more common on the web? Do they intend to try to prove it?

10
SloopJon 21 hours ago 3 replies      
The Github README says, "Guetzli generates only sequential (nonprogressive) JPEGs due to faster decompression speeds they offer." What's the current thinking on progressive JPEGs? Although I haven't noticed them recently, I don't know whether they're still widely used.
11
sschueller 19 hours ago 5 replies      
Lots of Swiss German coming from Google lately. Zopfli, Brotli and now Guetzli.

I'm still hoping for a Google Now that understands Swiss German :)

12
kozak 19 hours ago 0 replies      
Wow, "webmasters creating webpages" is something that I haven't heard for a very long time! I'm nostalgic.
13
rurban 4 hours ago 0 replies      
Nice Makefile, jirki. I really have to look into premake which generated it.

But I get a gflags linking error with 2.1 and 2.2 with -DGFLAGS_NAMESPACE=google. This is atrocious:

  google::SetUsageMessage(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)

Without this it works fine.

Guess I still have some old incompatible gflags headers around.

EDIT: The fix on MacPorts with 2.2 installed by default into /usr/local is:

  make CXX=g++-mp-6 verbose=0 LDFLAGS="-L/usr/local/lib -L/opt/local/lib -v" CFLAGS=-I/usr/local/include

i.e. enforce gflags 2.2 over the system 2.1

14
discreditable 20 hours ago 0 replies      
15
ktta 21 hours ago 1 reply      
I wonder how Dropbox's Lepton [1] compresses JPEGs encoded using Guetzli. Since they already pack more info per byte, would there be noticeable compression?

Someone out there must have tried this.

[1]:https://blogs.dropbox.com/tech/2016/07/lepton-image-compress...

16
kzrdude 17 hours ago 1 reply      
I wonder, does google's blog pick up that I can't read their web page due to javascript blocking? Do they evaluate how many readers are turned away due to such issues?
17
j0hnM1st 14 hours ago 0 replies      
18
iagooar 4 hours ago 0 replies      
This Swiss naming of algorithms really gets old, especially if you speak (Swiss) German...
19
d--b 20 hours ago 5 replies      
Why spend a lot of time improving JPEG instead of spending time promoting an HEVC-based standard like this one? http://bellard.org/bpg/
20
drdebug 3 hours ago 0 replies      
This really looks great. I really wish the author(s) could provide a detailed overview of the human vision model algorithm being implemented, what it is doing and why, so we could reproduce an implementation, maybe even provide improvements. Otherwise, amazing work.
21
wildpeaks 9 hours ago 1 reply      
Just tried the precompiled binary from:

https://github.com/google/guetzli/releases

and I'm getting "Invalid input JPEG file" from a lot of images unfortunately.

22
whywhywhywhy 15 hours ago 0 replies      
Great tech, shame about the name
23
leetbulb 21 hours ago 0 replies      
I use ImageOptim (https://imageoptim.com) for small tasks. For larger tasks, https://kraken.io is nuts.
24
kup0 16 hours ago 0 replies      
So far the results have been useful for me. It's been able to reduce size on some tougher images that other optimizers would ruin quality-wise.
25
tannhaeuser 19 hours ago 1 reply      
Is JPEG2000 with progressive/resolution-responsive transcoding still a thing, or is HTML <picture> the way to go for responsive images (or maybe WebP)?
26
corybrown 21 hours ago 3 replies      
Very cool. I'm not an expert, but does JPEG generally have a ton of flexibility in compression? Why so much difference in sizes?
27
mkj 12 hours ago 0 replies      
Nice work. And yet Google Images still has horribly compressed low-resolution thumbnails...
28
phkahler 20 hours ago 0 replies      
Does it support 12-bit jpeg?
25
In Manufacturing and Retail, Robot Labor is Cheaper Than Slave Labor Would Be 60secondstatistics.com
119 points by helmchenlord  8 hours ago   134 comments top 18
1
cobookman 8 minutes ago 0 replies      
This hits home with me so much. I've interacted with a few startups designing robots to replace human labor. It was scary to see that the automation is here; it's just too expensive compared to human labor. But technology generally decreases in cost over time while human labor gets gradually more expensive, so it's only a matter of time before all manual labor is replaced by robots.
2
kevinr 6 hours ago 1 reply      
As is only appropriate, given the word's etymology.

> robot (n.) 1923, from English translation of 1920 play "R.U.R." ("Rossum's Universal Robots"), by Karel Capek (1890-1938), from Czech robotnik "forced worker," from robota "forced labor, compulsory service, drudgery," from robotiti "to work, drudge," from an Old Czech source akin to Old Church Slavonic rabota "servitude," from rabu "slave," from Old Slavic orbu-, from PIE orbh- "pass from one status to another" (see orphan).

http://www.etymonline.com/index.php?term=robot

All that's old is new again.

3
DanielBMarkham 6 hours ago 6 replies      
There are some interesting lessons to learn from history here that I'm not seeing brought up.

Looking across multiple cultures and economies that had slavery, slavery has a negative impact on both slaves and slaveholders.

Of course, nobody cared about the impact on slaveholders while there were actual humans being enslaved, but as we move to a robotic society? This is going to be a huge deal. Slaveholders and multi-generation slaveholding families have a fundamentally different way of looking at themselves and their culture than people who do not own slaves. Once we enter an era where every person is effectively coddled by multiple robotic "slaves" that do their every whim, we're going to be hacking into the human social ecosystem in ways never anticipated before.

4
rejschaap 7 hours ago 10 replies      
"If it takes $2,000 to install what is basically an iPad and stand for customers to order from at McDonalds or Chipotle, a restaurateur is looking at less than a month before recouping their entire investment if they eliminate just one cashier position."

Obviously it takes a little bit more than that. You need to develop the software that runs on the kiosk. You also need a back-end system so the kitchen knows what to prepare. So the investment is a bit higher than that. But you will recoup it very quickly at McDonald's scale. And they will be able to provide better and faster service.
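
For the curious, the payback arithmetic spelled out (the $2,000 comes from the parent comment; the wage and hours are illustrative assumptions):

  kiosk_cost = 2000.0            # installed cost quoted above
  hourly_wage = 12.0             # assumed fully loaded cashier cost
  hours_per_month = 40 * 4.33    # one full-time position
  payback_months = kiosk_cost / (hourly_wage * hours_per_month)
  print(round(payback_months, 2))  # ~0.96 months at these assumptions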

5
6d6b73 3 hours ago 2 replies      
Automation will turn capitalism into somewhat benevolent corporate city-states. Imagine a city run by a corporation which tries to automate and optimize everything that is not its core business. To make the employees and their families happy, the corporation will provide the best care (child, health, environment) possible. In time this will turn some of these city-states into efficient, clean, healthy, happy places to live, but only relatively small groups of people will be able to enjoy it. Some of these city-states will be very dystopian and people living in them will be miserable. Technological and societal progress in the "utopian" cities will be much faster than in the "dystopian" ones, which will possibly lead to wars.

We will not solve automation-driven unemployment by taxing robots, and UBI will generally not work on a country-wide scale. A partial solution will be corporate UBI, which basically will mean that if you work for Corp X, you and your family will have everything you desire provided for you. As for everyone else..

This is already happening on some smaller scale. All these corporate campuses are beginning of that. They will eventually grow to become self-sufficient cities.

Now the question is - when you and your family depend on one entity, i.e corporation that has hired you, are you not a slave to them?

6
k_sze 5 hours ago 0 replies      
So it's now harder to say with a straight face "we're using robots because it's more ethical than exploiting people".

"No, it's just cheaper."

7
AKifer 6 hours ago 4 replies      
Sooner or later, robots and AI will be able to provide 100% of humanity's material needs, and the very nature of every society will be shuffled by that new reality.

When every material need is fulfilled, a lot of questions arise:

1- What's the essence of private property when working robots can already fulfill all the needs of humanity?

2- What's the essence of political power when nobody feels the need to elect good policymakers anymore because their life is already perfect?

3- What will be the safeguard to prevent maleficent/egoistic minds from locking away access to all that abundance? That's quite a philosophical question, as humanity has never experienced that kind of pure evil mindset. Every dictatorship, slavery and oppression was always driven by the context of competition for control over limited economic resources.

4- And fundamentally, what will be the next thing that drives humanity towards evolution? Curiosity for knowledge? Space exploration and adventures? Spiritual achievement? Perfection (and what's perfection?)? Are these goals philosophically equal? Does willingness/laziness to adopt such noble goals affect your share of the pie? Does a "share of the pie" even matter when the pie has an infinite surface?

Only the future, and futuristic/philosophical writings, will tell us where all of this will lead the world.

8
maxerickson 1 hour ago 0 replies      
Cost effectiveness is also why farmers use tractors and other large machinery.
9
singularity2001 2 hours ago 0 replies      
Just to throw in a thought/data point:

Nvidia is currently selling their 'industrial' deep learning system DGX-1 for $129,000. It doesn't have the IQ of a mouse yet, but it can beat humans in some tasks (as can mice?).

[0] http://www.nvidia.com/object/deep-learning-system.html

10
WalterBright 6 hours ago 1 reply      
Slave labor is economically inefficient, and slave based economies have fared very poorly compared with free labor economies.
11
squarefoot 6 hours ago 2 replies      
Which will hopefully bring us to an inevitable change in our economics. If technological advances are going to create millions if not billions of unemployed people, the answer won't be rioting in the streets and burning all machines in sight, although to many people this will appear to be the only viable solution.
12
almavi 7 hours ago 2 replies      
My first thought was: "Oh, great! If slave labor is not efficient anymore there will be no more slave workers in the world". On second thought (and based on our history), that will probably be true, but only because people who today are working just for a plate of rice will starve to death.
13
legulere 6 hours ago 1 reply      
> On the other hand, if institutionalized slavery still existed, factories would be looking at around $7,500 in annual costs for housing, food and healthcare per worker.

What makes a lot of things like food so expensive is human work. If you had access to slaves you could probably also reduce those costs.

We still have a lot of people living on under $1 per day in this world. That's a factor of 20 off from the cited amount of money per year.

14
thr3290 6 hours ago 0 replies      
> On the other hand, if institutionalized slavery still existed, factories would be looking at around $7,500 in annual costs for housing, food and healthcare per worker.

There is a wrong assumption that the factory has to cover those expenses. In reality this cost is often offloaded to the government or another party.

15
mschuster91 7 hours ago 4 replies      
The consequences for societies that define the status/value of their members based on their employment/job will be disastrous. In, let's say, 20 years robots most likely will have overtaken agriculture, manufacturing and driving - by far the biggest job providers.

And I see no movement at all by our politicians to prepare societies for this shift, except a couple of countries piloting small-scale UBI... and the USA actually trying to go the opposite route.

16
dsjoerg 3 hours ago 0 replies      
anyone know who/what is behind this site?
17
jgalt212 4 hours ago 0 replies      
Robots, or no robots, it's very important from a national security perspective to have a sizeable domestic manufacturing base.
18
elastic_church 7 hours ago 0 replies      
forget about the former title of coffee shop barista, our cotton and tobacco trade is about to go into overdrive!
26
Chrome/macOS users in The Netherlands cannot visit google.com or google.nl
130 points by dutchbrit  7 hours ago   50 comments top 25
1
Severian 3 hours ago 1 reply      
Direct fiber connection to Level 3 (Columbus, Ohio) here at work. Had to disable QUIC as I am getting the same errors as well when trying to access Google. Using local DNS, not sure what upstream DNS we are using though.
2
nissarup 6 hours ago 1 reply      
Quicker link to the setting: chrome://flags/#enable-quic

I had the same problem this morning. Disabling QUIC solved it.

3
mgoetzke 3 hours ago 3 replies      
Same here in Germany on Windows this morning... also had to disable QUIC.

Funnily enough, I switched to Microsoft Edge for a short while, but have once again realized why it's unusable. After about 1-2 minutes of using it, the entire browser became unresponsive for about 60 seconds. Afterwards I couldn't press sign-in on YouTube because an invisible iframe from the OneNote Web Clipper extension was overlaying it. Nobody seems to be testing those either :)

4
d99kris 2 hours ago 1 reply      
I've had problems with QUIC in various networks (corp, home), mainly in Singapore, for more than a year, to the point that I now disable QUIC as one of the first steps when I encounter Gmail connectivity problems from Chrome/Chromium on a fresh machine.

I find it odd that QUIC is enabled by default when it apparently has poor fallback capabilities to "non-QUIC" mode.

5
atsjie 4 hours ago 0 replies      
Had the same issue. Restart didn't work and Safari was working fine.

For me, what worked was removing Google's DNS server 8.8.8.8 from my network settings, and voila: it worked.

6
mschwaig 4 hours ago 1 reply      
Is there a more detailed write-up about what exactly is causing the problem?
7
Mojah 2 hours ago 0 replies      
Regardless of this issue, the QUIC protocol is still fascinating and deserves some attention: https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-udp...

This looks like an implementation error, either client or server-side.

8
dutchbrit 7 hours ago 1 reply      
Map overview shows that Berlin might be experiencing the same issue: http://allestoringen.be/problemen/google/kaart/
9
daw___ 5 hours ago 1 reply      
Any explanation for this?
10
Carducci 6 hours ago 0 replies      
I am on Ubuntu and have exactly the same issue for google.nl, youtube.com and google.com.

So it is not only on Mac OS X

Edit: And it started working again :)

11
puzzles 5 hours ago 1 reply      
Is this the reason googleapis is having trouble? Maps and fonts are really slow for me today.
12
hanley 2 hours ago 0 replies      
This is happening for me in the US East Coast and disabling QUIC fixed the issue.
13
pjmlp 3 hours ago 0 replies      
I started experiencing the same thing last year on our Android devices on a customer network; thanks to QUIC, we weren't able to use their network authentication any longer.

For some reason, even disabling it did not help.

Our workaround was to use the old system browser for authentication and then switch to using Chrome after being authenticated.

14
Moter8 4 hours ago 0 replies      
Having the same issue (behind a company network) in Germany.
15
jjpe 7 hours ago 0 replies      
Ah, so that's what was going on. The issue, insofar as I could perceive it, is fixed now though. All googly sites are up again.
16
sirolf 6 hours ago 0 replies      
El Capitan, latest public beta, with Chrome Version 58.0.3029.19 beta (64-bit): no problems.
17
JDevlieghere 5 hours ago 0 replies      
Same here in Belgium :-(
18
OrangeTux 5 hours ago 0 replies      
Dutch here. I can confirm the issue. The problem also appears on Chromebooks and Linux. The solution proposed in another comment in this thread fixes it.
19
DeepYogurt 7 hours ago 0 replies      
Works for me...
20
daanaerts 5 hours ago 0 replies      
Confirm issue here as well
21
mikkelwf 7 hours ago 4 replies      
Danish google down as well..
22
tinco 4 hours ago 3 replies      
The title is misleading: Google has not broken anything, Dutch providers have. I hope Google will do nothing, and the providers will be pressured into fixing their shitty hardware.
23
bamb00zl3 3 hours ago 0 replies      
good time to switch to startpage.com....
24
jbverschoor 4 hours ago 0 replies      
No problems here. Must be one of the shitty providers like Ziggo or KPN.
25
1_player 3 hours ago 1 reply      
I'm going to hijack this thread to describe something weird that happened to me yesterday w/ Google: I had just connected on my usual BT (UK) wifi, and when I opened google.com, Safari complained about an invalid SSL certificate: it was self-signed, with expiration in 2111. After a dozen refreshes, everything started to work, with the correct certificate.

Anybody seen anything like that? Is it possible that a corrupted packet could appear as a self-signed certificate? Did some MITM screw up?

27
Animista: a collection of ready to use CSS animations animista.net
490 points by tilt  21 hours ago   49 comments top 16
1
notum 9 hours ago 1 reply      
Awesome stuff, thank you!

A tiny wee bit of an issue: copying to clipboard does not work on Firefox Dev Edition; I had to get the code from the source as selecting was disabled on the <pre>.

Also, why force downloading the whole library? Why not append the keyframes used to the code to be copied as an alt box entitled "Just want to use this one animation? Copy this!".

2
sova 14 hours ago 0 replies      
Lovely. Thanks! In a couple ways (flip left / flip right vertical) it is just what I was looking for. Thanks for sharing this, and I like the loading image. You did a good job with this and it would be nothing unusual to declare you a CSS ninja.
3
grepthisab 16 hours ago 0 replies      
This is great, I echo what others have said about getting the code. I would definitely pay for this.

One thing though: I've been clicking back and forth between this thread and the site, and every time I do I have to go through that animista loading image again. Got kind of annoying after a couple of times. Maybe do something else, or just make it so it appears only the first time or something?

4
raspo 18 hours ago 1 reply      
I really like this collection, this is going to turn out very useful for me in the future.

With that said, the website/webapp behaves kind of weirdly in my opinion. I would have never ever thought to click on the heart icon in order to select an animation for later download. I assumed that was some form of BS sharing feature. Also, when clicking on the "code" icon { } I wanted to see the underlying animation, not just its implementation.

5
talmand 1 hour ago 0 replies      
For those that are concerned whether the animations work on mobile or not.

They do.

Well, as long as the browser of your choice supports the CSS properties in use. Which, at this point it would be easier to keep a list of browsers that do not support such things as opposed to ones that do.

It's just that the site itself misbehaves on mobile. To test the animations on a phone, just tell your mobile browser to request the desktop site. You should be able to see the page then. More than likely, that is; your experience may vary. I was able to see the desktop site on my Android phone. I was just unable to cycle through all the different animations to see them in action or choose them. That has more to do with how the UI of the site is built, nothing to do with the animations.

6
throwaway2016a 19 hours ago 0 replies      
I feel almost paralyzed by choice looking at this site. Kind of like when trying to choose a font and you have 200 on your computer. But the grouping is nice and a lot of these look very interesting. Definitely bookmarking.
7
grumblestumble 18 hours ago 0 replies      
This is really cool, and very nicely packaged. I agree with some other commenters that showing sample html would be a good improvement, but in general this is awesome.

That said...

Is it acceptable to animate `box-shadow` these days? Thought that was a pure framerate killer on mobile.

8
jcwayne 3 hours ago 1 reply      
With great power comes great responsibility. Please use these sparingly.
9
petarb 16 hours ago 0 replies      
I wish it worked on mobile...
10
ArneBab 19 hours ago 1 reply      
These look great! What's the licensing of the generated code? Can I use these with CC BY-SA projects?
11
sergiotapia 18 hours ago 1 reply      
Pulled in a simple animation for a feature I'm working on, works like a charm. This is awesome!

I can't find the license though, and won't push to prod until I do.

12
rajangdavis 17 hours ago 0 replies      
Checking this out on mobile and the animation for the service not being available on mobile is fluid as heck... really nice work!
13
andrewstuart 18 hours ago 2 replies      
A possible business model is to charge $1 to grab the source code for a snippet, or some sort of monthly fee for wider access. It might be easy to rip off via "inspect", but there are a lot of people in this world who pay when required.
14
obiefernandez 15 hours ago 1 reply      
These are great. The CSS code generation is awesome.

Can't get any of it to work on my site. No instructions on how to make it work, other than copying the CSS.

Blerghhhh

15
someguy101010 20 hours ago 2 replies      
It would be nice if we had the HTML for some of the flip content, especially for the a/b content.
16
dbg31415 20 hours ago 3 replies      
These are cute and seem like they have promise, but the fact that the site won't load on mobile means I won't have confidence to use them for any mobile-first web project. Hope you add mobile compatibility soon!
28
OpenEMR: Electronic Medical Records and Medical Practice Management Software open-emr.org
321 points by mabynogy  19 hours ago   172 comments top 25
1
JusticeJuice 17 hours ago 10 replies      
I'm currently working on my thesis, I'm trying to design a radically different take on EMR/EHR systems - http://barnett.surge.sh/

I've come to realize healthcare software is a significantly trickier problem than most realize. Not in terms of technical possibility, but other factors.

Building healthcare software is really hard - which seems like such a paradox, because surely ensuring highly trained individuals have half-decent tools would be a huge goal for society - but almost all healthcare software is still, frankly, terrible.

After heaps of reading I've narrowed it down to 3 reasons why healthcare software is so hard to build.

1 - Modern healthcare is complex and varied. Building any healthcare software is no easy task; even simple systems have many other systems to integrate with. But what makes this worse is that no two places do healthcare quite the same. Between regions and institutions, healthcare can vary a shit ton.

2 - Healthcare is risky. Sounds obvious, but if your software fails, people die - and this risk creates an attitude of "don't touch it". Changing systems carries such risk that institutions will use the same software until it's so out of date it's not funny. This impedes improvement and innovation.

3 - Healthcare software is hard to sell. Not in the sense that it's unprofitable, but that it takes ages to get users. Say I made video editing software - I could get professionals trying it out tomorrow. However, if I make a PMS, I have to sell it to the entire practice - not just a few keen doctors. From making contact to actually getting a small practice using your software can literally take years - making it super hard to enter the market.

I've been working on this topic full-time for a year now - and I must admit, I don't have the golden answer. However, I'm designing a system as a response to these factors, an alternative to interoperability, and would love to talk to anybody in the industry about it.

http://barnett.surge.sh/

eliot.slevin@gmail.com

2
messo 2 hours ago 2 replies      
Norway has been building a universal, state-run system over the last couple of years, which gathers all the patient information in one centralized system. Existing commercial companies that already deliver solutions to doctors and hospitals have integrated this new system into their existing products. It is practically interoperable by now. The system was beta-tested last year, and is being rolled out to doctors' offices this year.

This will benefit both doctors and patients, as they no longer need to manually juggle countless different systems (and plain old paper files). Overmedication (and dangerous drug interactions) has been a real problem for decades, because one doctor does not know what another has been doing. When a patient has to visit different hospitals, they have to repeat blood samples, tests, etc.

Patients can now log in to one service (helsenorge.no), with the same security as other state-run services (taxes etc.). They get access to their complete health record, can book an appointment at their local doctor's office, order an online consultation, reorder prescriptions, etc.

Maybe something for other countries to learn from?

3
jdhe 18 hours ago 7 replies      
As other comments mention, this project has been going for a while. The central reason it (or something like it) has not been robustly built out and deployed universally is the way Congress chose to structure the funding for development of EMRs/EHRs. Rather than trying to find a system that could be universalized, the (idiotic) plan has been to give millions of dollars to multiple corporations to develop systems, then subsidize the purchase cost to providers to encourage adoption. Then, only once everyone has invested in their own proprietary system will they begin trying to universalize the system by developing cross-compatibility. This methodology is perfectly in line with the free-market approach that the US has championed for decades. It is also the opposite of how almost every other country has developed and adopted universal EMRs. Sad. Especially because we know that a good, affordable universal EMR would significantly enhance the ability to deliver care for almost all sizes of healthcare providers.
4
ipunchghosts 16 hours ago 5 replies      
EMR is simply too messy for OpenEMR to put a dent in the issue. I truly wish it were different. If insurance carriers are permitted to operate across state lines in the future, I think there is a chance of a Google or Microsoft getting into the space, which I think is the right way to go. Right now, the large companies managing EMRs are huge monoliths that simply don't care about innovation. They are like defense contractors to the military.

If the market ever opens up and consumers have more choice, it's clear that someone like Google or MS would win, and I honestly believe that would really give healthcare a boost in this country.

5
walrus01 14 hours ago 3 replies      
If people had any idea how much antiquated medical software is out there in production, running on Windows 98, 2000, or XP and coded in Visual Basic, they would run away screaming. It's seriously that bad.
6
ipunchghosts 16 hours ago 1 reply      
As much as I would love this, I simply don't think it will work. Why? Epic is simply too big and has too much of the market, and everyone else is too far behind.

I use picnichealth.com to deal with this EMR mess and I love it!

7
kakoni 18 hours ago 1 reply      
We've been gathering various open source health projects here. https://github.com/kakoni/awesome-health
8
hitgeek 19 hours ago 1 reply      
This project has been around for a long time. I always take a look at it when I need a reference system for thinking about data models or UX-related issues for health-record-type projects.

Looks like it got a major facelift since the last time I checked it out; that must have been a lot of work. Congrats to the contributors for making that happen - it looks more modern.

9
brianmartinek 16 hours ago 2 replies      
Does anyone know of an EMR system that could house my household/personal health data? One of my struggles is organizing my own health data in a single place, with structured data formats, document uploads, etc.

The closest thing I have found is http://mymedicalapp.com/ but it appears to have been abandoned by the developer.

10
rdudek 17 hours ago 2 replies      
How are they planning on pushing it to hospitals? All the major players have adopted Epic EMR, at least here in Colorado. And once they have a system, they'll stick with it: premium support and the huge cost of switching will prevent any change.
11
davidlee1435 17 hours ago 0 replies      
12
Skeletor 13 hours ago 0 replies      
I think it's great that projects like these exist: http://www.open-emr.org/ , https://oscar-emr.com/oscar/ , http://openmrs.org/ , and https://www.hl7.org/fhir/http.html

I think these tools would be even better if they released their code under LGPL (instead of GPL) so that lazy commercial EHR developers would reuse and help maintain some core modules to promote more interoperability.

I don't think it's ever going to be possible (or even desirable) to have a "Universal EHR" that everyone is forced to use, whether through government intervention or through market/economic forces. We can all exchange emails with each other, but we aren't all forced to use the same email client.

The reason that the entire healthcare system seems broken to most consumers is legacy EHR systems in large hospitals. These legacy enterprise vendors are essentially what Oracle was 20 years ago in the rest of the enterprise software market, before companies like Salesforce.com came along. Another part of this "broken" feeling is the difficulty of exchanging data between different EHR systems, but this doesn't have to be the case.

"If you've seen one HL7 standard implementation, you have seen exactly one HL7 standard implementation." Which means most systems don't interoperate with anyone else's systems unless there is an existing commercial relationship that forces everyone to interoperate on a local scale.

For anyone looking to work on an idea to improve healthcare check out: https://www.drchrono.com/api/ and/or https://www.drchrono.com/careers/
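
To make the FHIR link concrete: the appeal of FHIR is that the interface is plain HTTP plus JSON. A rough sketch of reading a Patient resource (the base URL below is a placeholder, not a real server, and assumes an STU3-style endpoint):

    # Rough sketch of reading a Patient resource over FHIR's REST API.
    # FHIR_BASE is a placeholder -- substitute a real FHIR server.
    import requests

    FHIR_BASE = "https://example-fhir-server.org/fhir"

    def get_patient(patient_id):
        """Fetch a single Patient resource as JSON."""
        resp = requests.get(
            "{}/Patient/{}".format(FHIR_BASE, patient_id),
            headers={"Accept": "application/fhir+json"},
        )
        resp.raise_for_status()
        return resp.json()

    patient = get_patient("example")
    print(patient["resourceType"])        # "Patient"
    for name in patient.get("name", []):  # list of HumanName structures
        print(name.get("family"), name.get("given"))

The transport is the easy part; the interoperability pain comes from how differently each deployment populates these resources.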

13
kevinmannix 18 hours ago 1 reply      
This is huge. I previously worked for a healthcare startup, and for it, as for every other healthcare startup we worked alongside or networked with, all roads led to EMRs. It was (or at least felt like) an impossibly large task without a clear path to execution, due to the tangled web of healthcare IT and regulations.

Something like this is huge, especially to potential competitors that now have a target to aim for.

14
tfaruq 14 hours ago 0 replies      
What is the difference between this and OpenMRS (http://openmrs.org)?
15
sidcool 12 hours ago 0 replies      
There's a pretty nifty open-source OpenMRS app called Bahmni that is widely used in poorer countries. A lot of my friends contribute to it. Worth checking out.
16
exception_e 18 hours ago 0 replies      
Please read about our groundbreaking v5 release here: https://medium.com/openemr/complete-meaningful-use-certifica...
17
mbrookes 5 hours ago 0 replies      
If you're interested in or working on medical software, you might be interested in Clinical Meteor: http://clinical.meteorapp.com/

"clinical:meteor is an open-source project creating a next-gen framework for building healthcare apps".

18
jasondc 18 hours ago 3 replies      
There is even open-source dental software - open source is seriously eating the enterprise (and healthcare) software world!

I run OpenDental for my wife's practice in North Beach, SF. It's licensed as AGPL, but I'll still take it :)

19
SteveNuts 19 hours ago 1 reply      
Having HTTPS enabled on their site would go a long way for software like this.
20
qrbLPHiKpiux 18 hours ago 1 reply      
I use Open Dental. Tell me how open it is. An open EMR in healthcare does not mean much. Take a look at the support needed to maintain proper compliance.
21
dualboot 14 hours ago 0 replies      
We've had an incredibly well adopted open-source EMR solution here in Canada for quite a while.

OSCAR

https://oscar-emr.com/oscar/

22
lacampbell 15 hours ago 0 replies      
My first permanent IT job was actually working with practice management software. It worked across multiple jurisdictions with fairly minimal changes - an appointment is an appointment in any country.

I actually suspect it violated several laws - I remember I had full access to an offshore database where patient notes were stored in plain text, along with names and addresses. I emailed the privacy commission about it, but they wanted me to name names and I was scared of losing my job.

So yeah, I hope you encrypt notes.

23
amelius 17 hours ago 1 reply      
Does it allow anonymization of patient data, so that it can be used for research?
24
edimaudo 15 hours ago 1 reply      
Is it easy to use? Does it reduce the amount of paperwork to be done? How safe is it?
25
peterwwillis 7 hours ago 0 replies      
I came here to ask the question "Has anyone (hopefully) done a 3rd party security audit of this open source medical records software?"

But then I saw this: http://www.open-emr.org/wiki/index.php/FAQ#What_is_ImageMagi...

And then this: http://www.open-emr.org/wiki/index.php/FAQ#What_are_the_corr...

..... http://www.open-emr.org/wiki/index.php/FAQ#What_do_I_do_if_I...
http://www.open-emr.org/wiki/index.php/FAQ#What_is_OpenEMR.2...

God help us all.

29
Show HN: Bit A fast and easy Python Bitcoin library github.com
43 points by ofek  10 hours ago   5 comments top 4
1
notthemessiah 37 minutes ago 0 replies      
Am I the only one annoyed by the name? "Bit" isn't very useful or specific, yet it takes up a pretty big parking space.
2
wbond 1 hour ago 0 replies      
Nice to see the project that inspired your recent https://github.com/pyca/cryptography performance work, ofek!
3
wyldfire 1 hour ago 0 replies      
Thanks for testing on PyPy and for supporting Python 3! Sometimes it's just a small set of changes to support Python 3, and testing on PyPy is awesome, so thanks!

bit.PrivateKey.get_transactions -- would there be any way to filter these to transactions since a given block height?
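
Until something like that lands in the library itself, a client-side workaround could look like the rough sketch below. It assumes get_transactions() returns a list of txids; get_block_height() is a hypothetical helper against a placeholder block-explorer URL, not part of bit:

    # Rough sketch: filter a key's transactions by minimum block height.
    # get_block_height() is hypothetical -- bit itself returns only txids,
    # so the height lookup has to go through some block-explorer API.
    import requests
    from bit import PrivateKey

    def get_block_height(txid):
        # Placeholder endpoint and response field -- substitute a real explorer.
        resp = requests.get("https://example-explorer.org/api/tx/{}".format(txid))
        resp.raise_for_status()
        return resp.json().get("block_height")

    def transactions_since(key, min_height):
        results = []
        for txid in key.get_transactions():
            height = get_block_height(txid)  # None for unconfirmed transactions
            if height is not None and height >= min_height:
                results.append(txid)
        return results

    key = PrivateKey()  # or load an existing key from WIF
    recent = transactions_since(key, 450000)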

4
aikorevs 4 hours ago 1 reply      
So what is a real-life use case for this?
30
The Man Who Terrifies Wall Street: U.S. Attorney Preet Bharara newyorker.com
12 points by elberto34  1 hour ago   3 comments top 2
1
burkaman 23 minutes ago 0 replies      
Contrast this with https://www.propublica.org/article/when-it-comes-to-wall-str...

The lesson I take is not that he was bad at his job or intentionally avoided top level investigations, but that nobody really terrifies Wall Street.

2
aaron-lebo 28 minutes ago 1 reply      
Not to be too pedantic, but (2016) on this title makes some difference considering recent events.

The Man Who Terrifies Wall Street: former U.S. Attorney Preet Bharara

The Man Who Terrified Wall Street: former U.S. Attorney Preet Bharara

       cached 17 March 2017 16:02:01 GMT