Hacker News with inline top comments, 22 Sep 2016
1
Sublime Text 3 Build 3124 sublimetext.com
48 points by tiagocorrea  41 minutes ago   19 comments top 8
1
wkirby 0 minutes ago 0 replies      
Actually, the only thing that keeps me from switching back to ST3 is Atom's first-class support for `.gitignore` and excluding files from the quick open menu.

I know there's a package that claims to update the file ignore pattern to match the open project, but it really doesn't work well at all.

2
spdustin 22 minutes ago 1 reply      
From the release notes [0]:

> Minor improvements to file load times

I didn't even realize there was room to squeeze out more performance here. Sublime Text is wicked-fast opening pretty much everything I throw at it.

[0]: https://www.sublimetext.com/3

3
derefr 16 minutes ago 3 replies      
> a menu entry to install Package Control

If Sublime is going to acknowledge Package Control, why not just ship with it? I'm sure the Package Control folks would be glad to move their repo upstream.

4
woodruffw 6 minutes ago 0 replies      
Awesome! I'm especially liking the Phantoms API - there's a ton of potential there for richer plugins and graphical inlining.

I've moved between maybe half a dozen editors over the past half-decade, but I always end up coming back to Sublime.

5
sagivo 3 minutes ago 0 replies      
Sublime is by far my favorite editor: fast, with lots of plugins, and especially good if you work with big files. I sometimes need to work with files larger than 150MB and it takes a few seconds to open them. Atom crashes and can't even open the files.
6
connorshea 15 minutes ago 4 replies      
I really, really wish it was open source. I understand why it isn't, but with its main competitors being Atom and VSCode, it's hard to warrant using a closed source text editor even if it's so much faster and I'm used to it.
7
pbnjay 7 minutes ago 0 replies      
Whoa very nice new features! Now I need GoSublime to support them!
8
bcherny 17 minutes ago 2 replies      
Does Sublime still exist? With all the hubbub about VSCode and Atom, I've sort of forgotten about it.
2
What every coder should know about gamma johnnovak.net
207 points by johnnovak  4 hours ago   55 comments top 24
1
crazygringo 9 minutes ago 0 replies      
This is one of the most fascinating articles I've come across on HN, and so well explained, so thank you.

But I wonder about what the "right" way to blend gradients really is -- the article shows how linear blending of bright hues results in an arguably more natural transition.

Yet a linear blending from black to white would actually, perceptually, feel too light -- exactly what Fig. 1 looks like -- the whole point is that a black-to-white gradient looks more even if calculated in sRGB, and not linearly.

So for gradients intended to look good to human eyes, or more specifically that change at a perceptually constant rate, what is the right algorithm when color is taken into account?

I wonder if relying just on gamma (which maps only brightness) is not enough: are there equivalent curves for hue and saturation? For example, looking at any circular HSV color picker, we're very sensitive to changes around blue, and much less so around green -- is there an equivalent perceptual "gamma" for hue? Should we take that into account for even better gradients, and calculate gradients as linear transitions in HSV rather than RGB?
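The basic difference the article describes (blending in linear light versus blending the stored sRGB values) can be sketched in a few lines of Python. The transfer functions below are the standard sRGB ones; the helper names are illustrative:

```python
def srgb_to_linear(c):
    # Standard sRGB decode for a channel value in [0, 1].
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Standard sRGB encode for a linear-light value in [0, 1].
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def lerp(a, b, t):
    return a + (b - a) * t

def gradient_step(a_srgb, b_srgb, t):
    # Interpolate two sRGB channel values at position t, doing the math in linear light.
    a_lin, b_lin = srgb_to_linear(a_srgb), srgb_to_linear(b_srgb)
    return linear_to_srgb(lerp(a_lin, b_lin, t))

# Midpoint between a black and a white channel:
naive_mid = lerp(0.0, 1.0, 0.5)             # 0.5 -- naive sRGB-space blend
correct_mid = gradient_step(0.0, 1.0, 0.5)  # ~0.735 -- linear-light blend, brighter
```

The gap between 0.5 and ~0.735 is exactly the effect the black-to-white gradient figures in the article are illustrating.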

2
jacobolus 1 hour ago 1 reply      
One thing I hate is that essentially all vector graphics and text rendering (Cairo, Quartz, MS Windows, Adobe apps, ...) is done with gamma-oblivious antialiasing, which means that apparent stroke width / text color changes as you scale text up or down.

This is why if you render vector graphics to a raster image at high resolution and then scale the image down (using high quality resampling), you get something that looks substantially thinner/lighter than a vector render.

This causes all kinds of problems with accurately rendering very detailed vector images full of fine lines and detailed patterns (e.g. zoomed-out maps). It also breaks WYSIWYG between high-resolution printing and screen renders. (It doesn't help that the antialiasing in common vector graphics / text renderers is also fairly inaccurate in general for detailed shapes, leading to weird seams etc.)

But nobody can afford to fix their gamma handling code for on-screen rendering, because all the screen fonts we use were designed with the assumption of wrong gamma treatment, which means most text will look too thin after the change.

* * *

To see a prototype of a better vector graphics implementation than anything in current production, and some nice demo images of how broken current implementations are when they hit complicated graphics, check this 2014 paper: http://w3.impa.br/~diego/projects/GanEtAl14/

3
cscheid 3 hours ago 2 replies      
Hey, so gamma is not a logarithmic response. You claim that the delta you use in Figure 2 is a ratio, but your code, https://github.com/johnnovak/johnnovak.site/blob/master/blog... uses a fixed power. These are not the same thing.

f(x+eps)/f(x) ~= eps f'(x)/f(x) + 1

f(x) = x^2.2, so f'(x) = 2.2 x^1.2

f(x+eps)/f(x) ~= 2.2 eps/x + 1

Human response to light is not particularly well-modeled by a logarithmic response. It's --- no big surprise --- better modeled by a power law.

This stuff is confusing because there are two perceptual "laws" that people like to cite: Fechner-Weber, and Stevens's. Fechner-Weber is logarithmic; Stevens's is a generalized power-law response.

4
skierscott 49 minutes ago 0 replies      
I work on algorithms that can be applied to images, and was equally surprised when I saw a video called "Computer color is broken."

I investigated and wrote a post called "Computer color is only kinda broken"[1].

This post includes visuals and investigates mixing two colors together in different colorspaces.

[1]:http://scottsievert.com/blog/2015/04/23/image-sqrt/

5
Negitivefrags 2 hours ago 1 reply      
Something important to note is that in Photoshop the default is gamma-incorrect blending.

If you work on game textures, and especially on effects like particles, it's important to change the Photoshop option to use gamma-correct alpha blending. If you don't, you will get inconsistent results between your game engine and what you author in Photoshop.

This isn't as important for normal image editing because the resulting image is just being viewed directly and you just edit until it looks right.

6
ansgri 2 hours ago 0 replies      
Enough has been said about incorrect gamma (this article and [0]); now I think it's high time to bash the software of the world for incorrect downscaling (e.g. [1]). It has much more visible effects, and has real consequences for computer vision algorithms.

In the computer vision course at my university (which I help teach) we cover this stuff so students understand the physics, but at the end of the lecture I always note that for vision it's largely irrelevant and isn't worth the cycles to convert images to linear scale.

[0] http://www.4p8.com/eric.brasseur/gamma.html

[1] http://photo.stackexchange.com/questions/53820/why-do-photos...
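The downscaling problem [1] refers to is the same linear-versus-sRGB issue applied to averaging. A minimal sketch (standard sRGB transfer functions; the 2x2 checkerboard mirrors the test image other commenters mention):

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def downscale_naive(pixels):
    # Average the stored sRGB values directly -- what most software does.
    return sum(pixels) / len(pixels)

def downscale_correct(pixels):
    # Average in linear light, then re-encode to sRGB.
    lin = sum(srgb_to_linear(p) for p in pixels) / len(pixels)
    return linear_to_srgb(lin)

# A 2x2 block of alternating black and white pixels (one channel):
block = [0.0, 1.0, 0.0, 1.0]
print(downscale_naive(block))    # 0.5 -- darker than the checkerboard looks
print(downscale_correct(block))  # ~0.735 -- matches the perceived brightness
```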

7
elihu 1 hour ago 1 reply      
This is very good and useful; I'll have to update my ray-tracer accordingly.

One thing not discussed, though, is what to do about values that don't fit in the zero-to-one range. In 3-D rendering there is no maximum intensity of light, so what's the ideal strategy to truncate to the needed range?
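The article doesn't answer this, but one common family of answers is a tone mapping operator applied in linear light before the gamma encode. A sketch of two options (plain clamping, and the simple Reinhard curve L/(1+L)):

```python
def clamp(l):
    # The crudest answer: truncate, losing all highlight detail above 1.0.
    return min(l, 1.0)

def reinhard(l):
    # Simple Reinhard operator: maps any non-negative radiance into [0, 1),
    # compressing highlights smoothly instead of clipping them.
    return l / (1.0 + l)

for radiance in [0.5, 1.0, 4.0, 100.0]:
    print(radiance, clamp(radiance), reinhard(radiance))
```

Either way, the range reduction happens on linear radiance; the sRGB encode comes after.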

8
mxfh 1 hour ago 0 replies      
Good reminder about these persisting blending issues in the linear interpretation of RGB values, which was well explained to non-coders in a quite popular MinutePhysics video: https://www.youtube.com/watch?v=LKnqECcg6Gw

As others have commented, the gamma scaling issues seem even more relevant.

Just please, don't use the RGB color space for generating gradients. In fact, it's ill-suited for most operations concerning the perception of color as is.

chroma.js: https://vis4.net/blog/posts/mastering-multi-hued-color-scale...

D3: https://bl.ocks.org/mbostock/3014589

Interesting excursion: historically the default viewing gammas seem to have lowered, because broadcasting defaulted to dimly lit rooms, while today's ubiquitous displays are usually in brighter environments.

https://www.w3.org/Graphics/Color/sRGB.html

9
Retric 22 minutes ago 0 replies      
Did anyone else think the first set of bars was linear, not the second? I could not notice any difference between the leftmost three bars in the bottom section. Or does this relate to how the iPad renders images or something?
10
Glyptodon 3 hours ago 2 replies      
The thing that seems a bit weird to me is that the constant light intensity gradation (fig 1) appears much more even/linearly monotonic to me than the perceptual one (fig 2), which seems really off at the ends: it sticks to very dark black for too long at the left end and shifts to white too fast at the right end.
11
kazinator 41 minutes ago 0 replies      
I was going to comment snarkily: "Really? Every coder? What if you program toasters?"

Then it immediately occurred to me that a toaster has some binary enumeration of the blackness level of the toast, like from 0 to 15, and this corresponds in a non-linear way to the actual darkness: i.e. yep, you have to know something about gamma.

12
reduxive 32 minutes ago 0 replies      
This article could really benefit from an image DIFF widget. Even animated flashing GIF images would be an improvement.

It needs something that not only permits comparable overlays, but (perhaps with a third diff layer) also highlights the ugly/wrong pixels with a high-contrast paint.

A handful of images are only somewhat obviously problematic, but for most of the images, I really had to struggle to find undesirable artifacts.

If it's that difficult to discern inconsistent image artifacts, one can understand why so little attention is often paid to this situation.

13
nichochar 16 minutes ago 0 replies      
The design of your website, and its readability, is great! Good job.
14
panic 3 hours ago 1 reply      
Nowadays GPUs are able to convert between sRGB and linear automatically when reading and writing textures. There's no more excuse for incorrect rendering on modern hardware!
15
slacka 1 hour ago 0 replies      
> The graphics libraries of my operating system handle gamma correctly. (Only if your operating system is Mac OS X 10.6 or higher)

Not just OS X. The majority of Linux games from the past two decades, including all SDL and id Tech 1-3 games, relied on the X server's gamma function. An X.Org Server update broke it about 6 years ago. It was fixed a few weeks ago.

https://bugs.freedesktop.org/show_bug.cgi?id=27222

16
qwertyuiop924 2 hours ago 0 replies      
I'd already seen most of this in a video (courtesy of Henry, aka MinutePhysics, https://m.youtube.com/watch?v=LKnqECcg6Gw), but it was nice to see a programmer-oriented explanation, nonetheless.
17
j2kun 2 hours ago 1 reply      
Is this why my computer screen's brightness controls always seem to have a huge jump between the lowest two settings (off and dimmest-before-off)?
18
willvarfar 3 hours ago 4 replies      
I'm divided; I really want the article to be true, and for everyone to realise what a huge mistake we've been making all along... but, as the legions of us who don't adjust for gamma demonstrate, ignoring it doesn't make the world end?!
19
kevinwang 2 hours ago 4 replies      
On my iPhone, for the checkerboard resizing, the sRGB-space resizing (b) is almost an exact match, while (c) appears much whiter.
20
jadbox 3 hours ago 1 reply      
Interesting that Nim (the language) is used in the examples -- good, readable code too.
21
wfunction 2 hours ago 0 replies      
Can anyone get IrfanView to output the correct image? I'm trying the latest version I can find and it still gives me full gray.
22
twothamendment 2 hours ago 0 replies      
I know and love my gamma. She makes the best cookies!
23
platz 1 hour ago 0 replies      
please don't use condescending titles barking what all coders should or shouldn't know (invariably the topic is a niche that the author wants to cajole others into caring about too)
3
How Norway spends its $882B global fund economist.com
143 points by punnerud  4 hours ago   60 comments top 5
1
kristofferR 3 hours ago 4 replies      
"It is run frugally and transparently" is a dubious claim, at least according to claims made on NRK's Folkeopplysningen (a show like Penn & Teller: Bullshit!, just better).

The fund spends a lot on being actively managed; one manager received ~$60 million in bonuses in 2010. However, they won't reply when people ask whether the bonuses are actually financially beneficial.

https://tv.nrk.no/serie/folkeopplysningen/KMTE50009215/seson... @ 28:30

2
cs702 3 hours ago 1 reply      
A little over a decade ago, when Norway's fund was called "the Petroleum Fund" and had "only" $147B, an article in Slate magazine explained what was special about it:

"Norway has pursued a classically Scandinavian solution. It has viewed oil revenues as a temporary, collectively owned windfall that, instead of spurring consumption today, can be used to insulate the country from the storms of the global economy and provide a thick, goose-down cushion for the distant day when the oil wells run dry."[1]

Since then, the fund has grown six-fold.

[1] http://www.slate.com/articles/business/moneybox/2004/10/avoi...

3
harryh 4 hours ago 3 replies      
882 B / 5.2 Million ~= $170k for every citizen of Norway.

At 4% a year that's $6,800 each in annual income. Not bad!
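The arithmetic holds up, as a quick check with the figures quoted in the comment shows:

```python
fund = 882e9        # USD, fund size as quoted
population = 5.2e6  # Norway's population, approx.

per_citizen = fund / population
annual = 0.04 * per_citizen  # a 4% yearly return, per citizen

print(per_citizen)  # ~169,615 -- roughly the $170k figure
print(annual)       # ~6,785  -- roughly the $6,800 figure
```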

4
netcan 3 hours ago 1 reply      
Norway's oil money story is one of the weirdest. Are there any examples in history where a country has saved up such a big stash? Are they planning to retire young, as a nation?
5
Lythimus 3 hours ago 1 reply      
Is there an index or ETF which follows this pension fund's investments?
4
Zuckerberg and Chan aim to tackle all disease by 2100 bbc.com
122 points by timoth  4 hours ago   187 comments top 35
1
kendallpark 3 hours ago 6 replies      
I'm glad they're putting money into medical research, but I kinda roll my eyes when people make big claims about curing X, especially when X is something incredibly broad like "cancer" or, in this case, "all diseases." AI/ML has barely scratched the surface of its potential in medicine; however, I find it naive to think that you can throw AI/ML at any random disease and always get a cure, even after a century. Will we have a cure for trisomy 21? For antisocial personality disorder? For obesity and addiction? These things are far more complicated than just creating the right drug.

But as much as I'm rolling my eyes at their blanket statement, the spirit of "yes we can!" does far more for science and progress than the naysaying of critics.

2
Animats 3 hours ago 3 replies      
Probably not feasible without some genetic re-engineering to design out vulnerabilities to common diseases. That's how aging will probably be solved. There's a big debug time problem, though. It takes about two generations to be sure you got it right. We'll probably have very long lived mice decades before it works for humans. (Many cancers in mice can be cured now. This doesn't translate to humans.)

Then there will be species conflicts. Merck people won't be able to mate with Novartis people because they'll be too different genetically.

3
mmaunder 3 hours ago 2 replies      
4
idlewords 4 hours ago 2 replies      
Pharmacology is an interesting example where better technology and scientific understanding have made things worse than earlier, low-tech methods ("inject plant extracts into animals and see what happens").

The number of new drugs discovered per dollar of research has been dropping since 1960, and obvious explanations (like "the easy ones have already been found") turn out not to explain the phenomenon. [http://www.nature.com/nrd/journal/v11/n3/fig_tab/nrd3681_F1....]

This is something we should try to understand better, since it goes against the intuition that technology is an unalloyed good in scientific research.

I applaud the money they're spending, but the level of technophilia in the announcement gives me pause.

5
calsy 2 hours ago 1 reply      
I know it's more of a western problem, but it would be good if someone could find some real solutions to the obesity crisis.

If you ask many GPs in the west they will tell you that a majority of illnesses they address are related to weight and diet. With an aging, increasingly overweight adult population alongside a sharp rise in child obesity come big consequences for health care over the next half century.

I'm not saying it's an easy task you can just throw money at, but if trends continue as they are now, the health and economic impact on society will be huge.

Bar some sort of national catastrophe (e.g. war, famine, disease), is it a crazy idea to think we could see a reduction in obesity levels? Or are we simply resigned to the fact that we will just get bigger and bigger in the future?

6
a13n 6 hours ago 3 replies      
"Mark Zuckerberg and Priscilla Chan announce $3 billion initiative to cure all diseases"

http://venturebeat.com/2016/09/21/mark-zuckerberg-and-prisci...

7
ryandrake 5 hours ago 3 replies      
Not to take the wind out of anyone's sails, but there is a concern to be raised about relying more and more on charities to fund the public good. When a democratically elected government funds the public good, at least in theory, the public at large has a small say in choosing what counts as "public good". When you leave it to charity, you're relying on the morals of individual wealthy donors to decide what counts as a "public good". I don't claim to know which method is more risky in terms of mis-allocation of resources, but it's something to think about.
8
sams99 2 hours ago 0 replies      
The absolutely crazy thing is that in this day and age the vast majority of diagnosed paediatric cancers are not sequenced (no germline, tumor, or RNA seq). It actually makes me feel quite sick that there is so much information out there that is not being mined or analyzed.

I hope this money does not go into some sort of "100 years from now" moonshot, as opposed to the huge, urgent needs we have for money right now.

9
karmicthreat 58 minutes ago 0 replies      
So let's say this were possible. Where would be the best place to throw this money? Just researching disease X one by one isn't going to be successful; there are not enough resources available to make that happen. Period.

So what if a big leap in computational biology happened? Making faster machines is relatively easier and largely unregulated.

So you focus on simulating disease and some form of automation that tries to cure it. We have the problem of building these models for the computer to crunch on, so why not build them from people? Continuously monitor everything about someone: DNA, the various omics, self-reports. All the while, machine learning is trying to learn these models so other automations can change them.

So the first thing we need is a way to collect all this data. Itself a major medical breakthrough. How much data do we need to build the models? This seems to be the first breakthrough we need to even approach this.

10
tschellenbach 5 hours ago 2 replies      
This is great. Great achievements as a startup founder and now philanthropy.

For those comparing this to pharma companies. Pharma companies invest in drugs that they can make money with. It sounds as though this $3 billion is aiming at more general research and making it publicly available.

11
pkaye 2 hours ago 0 replies      
I just feel this is too broad and unbounded. They should have focused on specific diseases and a shorter timeframe. By 2100 most of us will be long dead by current standards.
12
ravenstine 56 minutes ago 0 replies      
All disease? I hope they mean specifically pathogens, because there are plenty of diseases that are caused by other things or have unknown causes, and you can throw ten times the net worth of Facebook at them and they probably wouldn't be cured noticeably sooner. If it's just pathogens, I could believe that if we programmed nanoprobes that could target them, making antibiotics/antivirals permanently obsolete.
13
hmate9 3 hours ago 6 replies      
Not sure why there is a lot of negativity here. It doesn't seem that ridiculous of a goal. I think by 2100 we will have extremely powerful AI that will make fighting diseases extremely easy compared to methods available today. To be honest, it seems like an achievable goal.

I wish Zuckerberg and Chan the best in this.

14
supergirl 3 hours ago 0 replies      
any money spent on research is good but,

1. governments around the world probably spend hundreds of billions yearly on research in medicine alone, and Zuckerberg wants to solve everything with his 3bn?

2. our current technology is not even close to good enough to make the kind of major breakthroughs needed to say we 'cured cancer'. for example, the biggest neural networks we have trained are on the order of 10bn parameters, while the human brain has 100bn neurons, each, I'm guessing, having at least 10 parameters. similarly for very small scale technology. I think we need to tone down the hype in AI and computing a bit.

15
danielmorozoff 3 hours ago 0 replies      
In this vein, some very interesting work is being done by the Church group at Harvard to encode cells to withstand all viral infections at the genetic level.

https://www.newscientist.com/article/2101657-synthetic-super...

16
drcross 3 hours ago 2 replies      
EDIT: JWZ disagrees that this is all rainbows and puppydogs.

Archive.org link because JWZ dislikes HN- https://web.archive.org/web/20160818144913/https://www.jwz.o...

17
2pointsomone 5 hours ago 0 replies      
Feeling so privileged right now; what a great time to be alive and see such visionary leadership. Thank you Mark, Priscilla, Bill, and the thousands of people whose names I don't know who work tirelessly on these problems.
18
yazaddaruvala 2 hours ago 0 replies      
Including aging as a disease?
19
helthanatos 3 hours ago 0 replies      
New diseases will come to be. Possibly the cures will cause the new diseases, possibly something else, but all disease won't be conquered. Does this ambitious projection include only disease, or does it also include disorder? I think it would be cool to cure disorder before disease.
20
snappy173 2 hours ago 0 replies      
this is marginally better than the Bluth family's fundraising efforts to cure TBA ...
21
unknown_apostle 3 hours ago 1 reply      
The commitment is very big, my comment is very small.

Whatever exists needs to be challenged continuously to keep existing. Any naive attempt to suppress all adversity forever will backfire.

22
languagewars 2 hours ago 0 replies      
I'm sick of this contrarian disruptive nonsense.

Redd Foxx got to choose his fate, but what am I going to do while forced to sit around the hospital dying of nothing?

Find a socially acceptable alternative to disease before you eliminate it.

(Ok, to put it more clearly: get off my damn lawn and my damn planet you stupid non-exponential function understanding kids.. Please!)

23
chris_wot 1 hour ago 0 replies      
I'd like to see their efforts in stopping the spread of the most rapidly increasing disease to challenge humanity in any man's lifetime.

It's called Facebook.

24
M_Grey 3 hours ago 1 reply      
I plan to conquer the world and all of its inhabitants... long after all of us here are dead or senescent.

I guess it is easy to make empty statements if you make them apply to a far enough future. Mars colonies from Musk (still working on getting the colonists there in one piece of course), and all disease tackled!*

*With $3Bn

25
troels 3 hours ago 0 replies      
How does one even define "disease"?
26
vegabook 5 hours ago 3 replies      
Very nice. Some context:

Mark Zuckerberg is worth 55 billion dollars. This is 5% of his net worth.

Mark Zuckerberg spent $20bn on WhatsApp. At his 28% shareholding in Facebook, that's a $5.6bn personal commitment.

The top 5 global pharma companies spent 42 billion USD on R&D in 2015 alone. Total pharma sector R&D is circa 200 billion USD, every single year. They aren't anywhere near "curing all diseases". This initiative would fund them for 5 days.

Very generous, but let's keep some perspective.

27
karmicthreat 3 hours ago 0 replies      
By 2100 I hope we have patchable structures that just need an electronic update to generate the new immune cell/protein/expression.
28
grownseed 2 hours ago 2 replies      
These acts of apparent philanthropy from ridiculously wealthy people rub me the wrong way. It feels like those rich patrons from older times who would bestow their "generosity" as they pleased, except that in modern times most countries have frameworks in place to make this sort of work happen, and these rich people choose to ignore them, or worse, disparage them.

Companies like Facebook (and people like Mark Zuckerberg) actively avoid paying taxes whenever they can, in a lot of countries that, for example, have public healthcare and other public institutions that would normally benefit from these taxes.

It's a bit like repeatedly stealing some kid's lunch, and then making fun of the kid for her weakness while appearing strong (and stronger in comparison to the weak kid) and compassionate when the kid passes out and you carry her on your back.

29
whybroke 2 hours ago 0 replies      
This is lovely and all.

But he is in a much better position to work on the curious problems of ever increasing political polarization in our new Post-Factual world.

If I were to guess, over the next century that problem is going to result in vastly more misery than a slight speed-up in medical technology could compensate for.

30
johansch 4 hours ago 1 reply      
Hubris, much?

Yes, this is a commendable effort, but I don't think they have the smarts/money for this. Even at an investor/patron level.

31
aestetix 3 hours ago 1 reply      
Step 1: shut down Facebook.
32
meira 4 hours ago 1 reply      
Charity with evaded money is very evil.
33
limeyy 5 hours ago 7 replies      
I wonder why all these billionaires first want to make billions, and then do philanthropy. How about making the services/businesses/products they are making all this money with more affordable in the first place?

For example: MS Office used to cost 400-500 euros for the average home user a few years ago. That was ridiculous.

If you have a small shop and 2000 Facebook page likes, Facebook rips you off each time you want to reach them.

Maybe the market dictates these prices, but then again, they are in a position to dictate the prices in the first place.

34
lostmsu 5 hours ago 0 replies      
I really wonder why these techy multibillionaires invest in medicine rather than techy stuff. I'd rather see fusion and hardware research done. Speaking globally, that might bump the global economy so significantly that illnesses would go down just because more people could afford education and medical care easily.
35
nenadg 6 hours ago 4 replies      
Oh that's just great, another billionaire philanthropist curing all diseases. Like Gates did.

I don't know whether their passion loses momentum inside an 'initiative/fund', or whether it was doomed to be the opposite of its cause from the start.

5
Hedge-Fund Son Thought Hedge-Fund Dad's Trades Were Pretty Fishy bloomberg.com
51 points by clbrook  2 hours ago   4 comments top 2
1
hkmurakami 1 hour ago 0 replies      
Levine's opening paragraphs are always so strong. The content marketers within us all would be wise to learn from his methods.
2
savanaly 1 hour ago 2 replies      
How did I know it was going to be Matt Levine just by the title?
7
Google backtracks on privacy promise with messaging service Allo techcrunch.com
12 points by Lordarminius  1 hour ago   1 comment top
1
the_common_man 2 minutes ago 0 replies      
FaceTime was also meant to be an open protocol. I think we just have to accept that big corporations can say whatever and do whatever and get away with it. Because we let them get away with it. Convenience trumps everything.
8
License now displayed on repository overview github.com
85 points by joeyespo  4 hours ago   12 comments top 8
1
CydeWeys 1 hour ago 1 reply      
This is really cool. One little detail I've noticed is that it doesn't seem to apply to private repositories, which seems like a bit of an oversight. The repo I'm checking clearly has Apache 2.0 license text in the LICENSE file. Just because a particular copy of a repo is private doesn't mean that the code within it isn't still bound by an open source license.

EDIT: Before anyone replies with "Why would you want that?", it's fairly common to stage a project privately on GitHub before publicly releasing it. It'd be nice to see that the license is detected correctly before going public with it. As it is now, I don't know what'll happen until I go public, and my first public commit in the repo may well be fixing up something minor to get the license detected properly.

2
infodroid 1 hour ago 1 reply      
...but it is only visible if you are logged in. That inconsistency is a little weird.
3
_ph_ 2 hours ago 0 replies      
A nice small enhancement. While many projects mention the license in the README, not all do, so it's useful to be able to see at a glance which license a project is under. In this context, I really appreciate how GitHub offers to add a license while creating a new repository, with a quick list of the most common licenses. It's a good incentive to put a new project under a proper license from the start and make sure the correct license terms are attached.
4
libeclipse 39 minutes ago 0 replies      
I've noticed that it doesn't distinguish between the different versions of the CC licence. It just puts them all under CC BY 4.0.
5
eriknstr 2 hours ago 0 replies      
I noticed this a few days ago and I think it's a great addition. I hope they keep this feature.
6
pcl 2 hours ago 0 replies      
This is great! I often find myself clicking through to the license file and trying to remember which license text it looks like. This'll be both easier and more reliable.
7
brink 2 hours ago 0 replies      
Neat.
8
jdubs 2 hours ago 2 replies      
Interesting that they're devoting screen real estate to something that can be found in the data section.
9
Analysis of chronic fatigue syndrome study casts doubt on published results statnews.com
45 points by rch  2 hours ago   5 comments top 3
1
philipkglass 1 hour ago 1 reply      
I had a sibling who struggled with CFS for years. Even before this bad study came out, the usual reaction from doctors was "you're suffering from mental illness" / "try exercising" / "you're faking." It sounds almost like this study was constructed for the convenience of doctors, so they could point to the publication and keep suggesting what they had already suggested.

It's hard living with a debilitating medical condition that doesn't have good treatments or a clear cause. It's even harder when the doctor says "I think it's all in your head" instead of "sorry, we really don't know how to treat this yet." That sort of consistent dismissal/borderline victim-blaming from real doctors is what I think pushed my mom toward bogus alternative health practices. There appears to be nothing medically valid about chiropractors but at least they don't call you crazy just for telling them about the experiences you've been having.

2
j374 1 hour ago 1 reply      
As a near 10-year sufferer of severe ME/CFS, I can confirm this study has done unbelievable damage to the cause and should absolutely be retracted in full. Recent research has identified unquestionable, tangible and severe abnormalities in immune and metabolic function that are unique and consistent to ME/CFS sufferers, and this is no longer realistically up for debate. Most doctors though still dismiss the condition as psychological, and because of this (and even though the burden has been shown worse than other immune disorders like AIDS in many cases) funding is virtually non-existent. This study has been so widely disseminated and taught to medical professionals, the damage will take some time to undo unfortunately.
3
lutusp 48 minutes ago 0 replies      
Apart from the article's importance in exposing some bad science, its author is skilled in narrative, in making an essay readable. A worthwhile read.
10
What are Bloom filters? (2015) medium.com
67 points by diggan  4 hours ago   11 comments top 6
1
dmlittle 1 hour ago 1 reply      
Bloom filters are great, but unfortunately they don't support deletion of items: a given bit position may be shared by more than one element in the filter, so you don't know if it's safe to clear it. A few weeks ago, someone posted about Cuckoo Filters [1], which are like Bloom filters but allow for key deletion.

[1]: https://www.cs.cmu.edu/~dga/papers/cuckoo-conext2014.pdf
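For readers new to the structure, a minimal sketch (hypothetical class and parameters, not any particular library) shows the core operations — and why deletion is unsafe: clearing the bits for one key could clear bits shared with another key.

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k positions from k salted hashes of the item.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def might_contain(self, item):
        # False means "definitely absent"; True means "probably present".
        return all(self.bits[p] for p in self._positions(item))
```

There is no `remove` method on purpose: flipping those bits back to `False` could break membership answers for other keys that hash to the same positions, which is exactly the gap that counting and cuckoo filters fill.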

2
lanna 14 minutes ago 0 replies      
I really liked Daniel Spiewak's article about it: http://www.codecommit.com/blog/scala/bloom-filters-in-scala
3
cwisecarver 1 hour ago 1 reply      
A bit long winded, but a good, thoughtful explanation. I honestly didn't know what they were, only how they could be used. Now I know.
4
ianleeclark 2 hours ago 1 reply      
Bloom filters are a really interesting data structure. When I found out about them I really wanted to use them, so I ended up building a distributed hash table. I thought Bloom filters could be an interesting way to optimize key lookups throughout the network: a Bloom filter can state that a remote node definitely didn't have a key (therefore no need to send it a request) or that it likely had the key (therefore worth sending a request to get the key's value).

Naturally, if consensus were established between nodes, using something like this would be unnecessary, but it turned out to be an interesting way of optimizing lookups in a DHT.
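That lookup optimization can be sketched in a few lines (all names hypothetical; a real DHT would exchange serialized filters over the network rather than call methods directly). Each node advertises a Bloom filter over its keys, and the client skips any node whose filter answers "definitely not":

```python
import hashlib

def bloom_positions(key, size=256, k=3):
    # k salted hash positions for the key.
    return [int(hashlib.sha256(f"{i}:{key}".encode()).hexdigest(), 16) % size
            for i in range(k)]

class Node:
    def __init__(self):
        self.store = {}
        self.bits = [False] * 256  # advertised Bloom filter

    def put(self, key, value):
        self.store[key] = value
        for p in bloom_positions(key):
            self.bits[p] = True

    def maybe_has(self, key):
        # False => definitely absent, so skip the network round-trip.
        return all(self.bits[p] for p in bloom_positions(key))

def lookup(nodes, key):
    queried = 0
    for node in nodes:
        if not node.maybe_has(key):
            continue  # filter rules this node out without a request
        queried += 1  # only "maybe" nodes cost a round-trip
        if key in node.store:
            return node.store[key], queried
    return None, queried
```

False positives only cost an extra (wasted) request; false negatives cannot happen, which is what makes the filter safe as a pre-check.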

5
ipunchghosts 3 hours ago 1 reply      
How many times on HN must this topic come up?
11
Teaching Concurrency (2009) [pdf] microsoft.com
64 points by luisfoliv  4 hours ago   8 comments top 3
1
Const-me 1 hour ago 0 replies      
When I was a kid, I loved playing the Transport Tycoon Deluxe video game.

When I grew up to be a programmer, I never had much trouble with concurrent stuff.

IMO designing concurrent programs is conceptually similar to building complex high-throughput, low-latency railway networks in the game.

2
TickleSteve 2 hours ago 3 replies      
I think the first thing people should be taught about concurrency... is when not to use it.

Concurrency can result in increased maintenance costs and complexity.

Concurrency is also not more efficient on a single core.

Concurrency can help with latency and response time.

In embedded systems in particular, there is an over-use of concurrency which often results in bloated, complex code.

3
argv_empty 3 hours ago 1 reply      
The only actionable advice in here for someone tasked with developing a curriculum for teaching concurrency is to make sure the prerequisite courses instill the idea of computation as a sequence of state transitions.
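That state-transition view can be made concrete with a toy model: treat each thread's unsynchronized increment as two atomic transitions (load, then store) and enumerate every interleaving. A minimal Python sketch (step names are my own, purely illustrative):

```python
from itertools import permutations

# Two threads, A and B, each do: load x into a register, then store
# register + 1. Each of those is one atomic state transition.
def run(schedule):
    x = 0
    regs = {}
    for step in schedule:
        thread, op = step.split("_")
        if op == "load":
            regs[thread] = x
        else:  # store
            x = regs[thread] + 1
    return x

def interleavings():
    steps = ["A_load", "A_store", "B_load", "B_store"]
    # Keep only schedules that preserve each thread's own order.
    for perm in permutations(steps):
        if perm.index("A_load") < perm.index("A_store") and \
           perm.index("B_load") < perm.index("B_store"):
            yield perm

results = {run(s) for s in interleavings()}
```

The set of reachable final values is {1, 2}: the "lost update" outcome 1 appears whenever both loads happen before either store, which is exactly the kind of fact that falls out of enumerating state transitions.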
12
Humans Spread from Africa in One Wave, DNA Shows nytimes.com
31 points by andyraskin  3 hours ago   8 comments top 2
1
hownottowrite 16 minutes ago 0 replies      
The commentary referenced in the article is far better than the article itself: http://bit.ly/2cWOXmj

[0] Short URL provided in place of the insanely long paywall-avoiding link in the NYT OP.

2
onetime20160931 1 hour ago 3 replies      
I can't read the full article. Even with all my cookies cleared it takes me to a subscription page. Same with web search. What's up with this?
13
LL and LR Parsing Demystified (2013) reverberate.org
60 points by bleakgadfly  3 hours ago   6 comments top
1
haberman 2 hours ago 1 reply      
Happy to see this show up here again. It's one of my articles that I'm most proud of.

I'm happy to answer any questions about it.

14
Researchers quantum teleport particle of light six kilometres sciencebulletin.org
330 points by upen  10 hours ago   196 comments top 21
1
TheRealPomax 8 hours ago 10 replies      
Not quite as impressive when you know, and if you don't you should before opening this link, that "quantum teleportation" is not teleportation. It was a cool-sounding name at the time, but has nothing to do with sci-fi style teleportation: nothing disappears from one place and then shows up in another.

"Quantum teleportation" is a process of information duplication using particles that already exist, and have been positioned such that there is enough distance between them that we can rule out direct interaction between them (that we know of given the current state of physics). Quantum teleportation is the process by which we then manipulate only the particles on one side of the distance divide, such that particles on the other side "end up" reflecting the same state that the ones we manipulated were.

Although "ending up" is probably the wrong term, because we use particles in special states that we already know are entangled, then split them up (which does not cancel entanglement) and then make use of their entanglement property: running an algorithm involving particles on one side should yield the exact same result as running the same algorithm on the other side, so a much more interesting algorithm is one that you run on one side in one way, and on the other in a different way, to effect a "data copy" without ever actually copying data (and very much without any kind of teleportation). The fact that you run your process with "the same particle" is the special part. Being able to even have two particles that are literally the same is a pretty bizarre bit of physics.

2
jdmichal 10 hours ago 5 replies      
I posted this in a duplicate thread:

For those thinking that this is a step towards faster-than-light (FTL) communication: As far as I know it's fairly certain that quantum entanglement will not allow for FTL communication. Basic principle is that while measurements between both sides will be correlated, it's not possible to tell how they are correlated until both sides compare measurements.

https://en.wikipedia.org/wiki/Superluminal_communication#Qua...

http://physics.stackexchange.com/a/203893

Given that, it seems like the touted benefit of using quantum entanglement here is in securing communications, since your measurements will no longer correlate if a third party is also measuring? At least, that's what I gathered.

3
phkahler 10 hours ago 3 replies      
>> Researchers teleport particle of light six kilometres

I'm sure they did nothing of the sort. At best they transferred an unknown state of a photon to another photon six kilometers away, then confirmed via measuring both.

4
Someone1234 10 hours ago 2 replies      
Just to give a concept of how difficult this is:

> The challenge was to keep the photons' arrival time synchronized to within 10 picoseconds,

> Since these detectors only work at temperatures less than one degree above absolute zero the equipment also included a compact cryostat, said Tittel.

The dark fiber seems like it was important for synchronizing the clocks. And while they claim this could be used for encryption keys, that is really a roundabout way of saying that very little information was actually transmitted/received, although the article doesn't say exactly how little.

If this technology was refined, you'd just use this system to send secure messages without the need for an encryption key.

5
cryptarch 6 hours ago 1 reply      
I don't think I understand it right.

My understanding is this:

1. send entangled bits to two separate locations A and B

2. determine their states (which will always be exactly opposite) on both sides

3. send data using classical means from A to B, xor'd with the quantum measurements

4. decrypt the data at B with: data ^ quantum_measurements ^ 111111 (the last step being "invert all bits", and ^ representing xor).

If that's it, how is this much better than sending a block of identical entropy to A and B, and using an OTP? Theoretical tamper-proofness?
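Steps 1-4 above can be sketched classically (this models only the xor bookkeeping, not anything quantum; pad generation here is ordinary randomness standing in for perfectly anticorrelated measurement outcomes):

```python
import secrets

def entangled_pads(n):
    # Classical stand-in: B's pad is the exact bitwise inverse of A's,
    # mimicking measurements that "will always be exactly opposite".
    a = secrets.randbits(n)
    b = a ^ ((1 << n) - 1)  # invert all n bits
    return a, b

def send(data, pad_a):
    # Step 3: xor the data with A's measurements before classical transmission.
    return data ^ pad_a

def recv(cipher, pad_b, n):
    # Step 4: data ^ pad_a ^ pad_b ^ all_ones == data, since pad_b = ~pad_a.
    return cipher ^ pad_b ^ ((1 << n) - 1)
```

Algebraically the round trip cancels: `(data ^ a) ^ (a ^ ones) ^ ones == data`. Which is indeed a one-time pad — the commenter's question stands, and the usual answer is that the quantum channel's value lies in key distribution with eavesdropping detection, not in the xor itself.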

6
tempestn 2 hours ago 0 replies      
There appears to be a great deal of conflation in many of these comments between quantum entanglement and quantum teleportation. Quantum teleportation makes use of entanglement, but is a different thing entirely.

In short, quantum entanglement is an effect that causes two quantum particles to share state instantaneously over arbitrary distances. It cannot be used to transmit information faster than the speed of light, essentially because while it is possible to manipulate the particle at one end, it is not possible to arbitrarily set it to a chosen state (as described fully in the no-communication theorem) [1].

Quantum teleportation is a way to transmit quantum information, ie the quantum state of a 'qubit', using both quantum entanglement and a classical communication channel. Because classical communication is required, no faster than light communication is possible. However, quantum teleportation is necessary if you want to transmit quantum information.

To very briefly sum up how it works, you start with a qubit whose state you want to transmit, along with two entangled particles, and a 'receiving' qubit that will receive the state of the sending qubit. Through an interaction between the sending qubit and the entangled particle on the sending side, the quantum state of the entangled particles is set to one of four possibilities. Which of the four possibilities resulted is sent via the classical communication channel from sending to receiving end. The receiving end then uses that information, along with the receiving-end entangled particle, to manipulate the receiving qubit into the identical state as the sending qubit, thereby 'teleporting' that state from sending to receiving end. The Wikipedia article has a more thorough layman's description, as well as the underlying math[2].

Caveat: I'm an engineer, not a physicist, so I may have made a mistake here as well, but the main take-away is that quantum teleportation is not the same thing as quantum entanglement, and its purpose is not FTL communication, but rather communication of quantum states.

[1] https://en.wikipedia.org/wiki/No-communication_theorem
[2] https://en.wikipedia.org/wiki/Quantum_teleportation
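The four-outcome protocol described above can be checked with a toy pure-Python state-vector simulation (a sketch of the textbook protocol, not the experiment in the article; function and variable names are my own). Forcing each of the four Bell-measurement outcomes and applying the corresponding X/Z corrections recovers the sender's state exactly:

```python
import math

def teleport(alpha, beta, m0, m1):
    """Teleport alpha|0> + beta|1> from A to B, forcing the Bell
    measurement outcome to (m0, m1).

    3-qubit state vector; basis index = q0*4 + q1*2 + q2.
    q0 holds the state to send; q1 (A's half) and q2 (B's half)
    start in the Bell pair (|00> + |11>)/sqrt(2)."""
    r = 1 / math.sqrt(2)
    s = [0j] * 8
    s[0b000] = alpha * r; s[0b011] = alpha * r
    s[0b100] = beta * r;  s[0b111] = beta * r

    # CNOT, control q0, target q1: flip q1 wherever q0 = 1.
    s[0b100], s[0b110] = s[0b110], s[0b100]
    s[0b101], s[0b111] = s[0b111], s[0b101]

    # Hadamard on q0.
    for i in range(4):
        a, b = s[i], s[i + 4]
        s[i], s[i + 4] = (a + b) * r, (a - b) * r

    # "Measure" q0, q1 as (m0, m1): keep q2's branch and renormalize.
    base = m0 * 4 + m1 * 2
    q2 = [s[base], s[base + 1]]
    norm = math.sqrt(sum(abs(x) ** 2 for x in q2))
    q2 = [x / norm for x in q2]

    # B's classical corrections: X if m1 == 1, then Z if m0 == 1.
    if m1:
        q2 = [q2[1], q2[0]]
    if m0:
        q2 = [q2[0], -q2[1]]
    return q2
```

Note that `(m0, m1)` must travel over a classical channel before B can correct, which is where the light-speed limit re-enters.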

7
d_theorist 9 hours ago 4 replies      
I had thought that quantum entanglement could not be used for communication because the state that is transmitted is random, and cannot be controlled at either end.

But, there are quotes in this article to the contrary. e.g.:

Such a network will enable secure communication without having to worry about eavesdropping, and allow distant quantum computers to connect, says Tittel.

Was my understanding mistaken?

8
kraftman 10 hours ago 3 replies      
>> Dark fibre, so named because of its composition (a single optical cable with no electronics or network equipment on the alignment), doesn't interfere with quantum technology

Isn't dark fibre either unlit capacity or leased fibres?

9
skywhopper 8 hours ago 2 replies      
Not satisfied with asking me to subscribe to their newsletter before I've even read one article, this site _also_ popped up a request that I allow them to send me desktop notifications. Do people actually fill these things out? Why does ScienceBulletin think I might even want these things?
10
b3lvedere 10 hours ago 1 reply      
"if you're a photon, you might want to keep reading."

So, i stopped reading :)

11
mechazawa 6 hours ago 2 replies      
Forgive my ignorance, as I'm a software engineer and not a physicist, but does this mean that in the future FTL communication will be possible? Every time I work with networking, one thing always springs to mind: the speed of light is a limiting factor. Will this mean that in the future latency issues will be a thing of the past?

http://www.stuartcheshire.org/rants/latency.html

12
cody3222 47 minutes ago 0 replies      
The Uber killer
13
sebringj 7 hours ago 1 reply      
Does modifying the state of one entangled particle destroy the link between them? If it does, how can you continually communicate without distance being a factor, since you'd have to keep resupplying entangled pairs? I hope it doesn't, or that there's some workaround, or maybe I'm simply asking the wrong question.
14
3chelon 10 hours ago 1 reply      
10 picoseconds is "one millionth of one millionth of a second"? Damn it, I must have been misunderstanding engineering units all my life.
15
atombath 10 hours ago 1 reply      
Awesome. So a quantum network would have the photon act as a data stream; in a way it would act like a physical security key. So with a lot of work, this would be amazing for 1:1 communications. But what if I want to send my dick pic to hundreds of people at once (1:100 distribution)? Is it possible to entangle one photon with multiple?
16
jcoffland 10 hours ago 3 replies      
Teleportation is not possible. Quantum teleportation is the transmission of the exact state of a photon from one place to another. Still, even this happens at less than the speed of light. Otherwise it would violate causality.

The media plays fast and loose with terminology either out of plain ignorance or the desire to sell a story.

17
fillskills 9 hours ago 1 reply      
This might sound like a very naive question but I am curious as to how the data was actually transmitted.
18
itissid 10 hours ago 3 replies      
Correct me if this sounds noobish. So they have teleported a state that has information that can be decoded to bits and bytes(?). Forget about human teleportation for a sec; isn't this a big deal for telecommunication in the not-so-far future?
19
antouank 9 hours ago 0 replies      
Similar story from yesterday (BBC) https://news.ycombinator.com/item?id=12539375
20
reduxive 6 hours ago 0 replies      
Clickbait. Deliberate use of the hot word "teleport" to exploit bikeshedding debates.

Cue endless discussions such as: What does teleport REALLY mean?

21
Rooster61 10 hours ago 3 replies      
The title of the article, while probably not intentional clickbait, is incorrect. Only the quantum state of the photon was "teleported", not the photon itself. The original photon was in fact destroyed, and only the information about its state was transmitted.
15
Tor Browser Exposed: Anti-Privacy Implantation at Mass Scale hackernoon.com
81 points by Jerry2  4 hours ago   34 comments top 8
1
FiloSottile 2 hours ago 1 reply      
This piece is charged with the personal bias of the author (https://twitter.com/movrcx), who launched a hostile fork of the Tor Browser Bundle because of "untrustworthiness".

I recommend you read this instead, which provides a more level-headed and technically correct analysis of the vulnerability (which was there, even if not properly in the terms described by OP):

https://hackernoon.com/postmortem-of-the-firefox-and-tor-cer...

2
lucastx 3 hours ago 2 replies      
From /r/TOR:

"Old news. This was fixed in 6.0.5.

https://blog.torproject.org/blog/tor-browser-605-released

Interesting note: The author is part of the rotor browser fork that is going no where so far. Doesn't look like the reported issue has been fixed there. In fact, no commits since before this blog post."

https://www.reddit.com/r/TOR/comments/53u1cd/tor_browser_exp...

3
necessity 3 hours ago 2 replies      
Yet again, TOR gets blamed for a Firefox vulnerability. Surprise, surprise...
4
nixos 3 hours ago 1 reply      
Using Tor may actually be less secure than using a normal browser.

At least when I connect to Microsoft, Google, Facebook, etc. I don't expect to get hit by a drive-by JS exploit, and Google does help with "safe browsing".

With Tor, you're one HTTP website (or non-HSTS website) away from a drive-by virus, with no way to tell that you're connecting to a dangerous exit node.

5
willvarfar 3 hours ago 2 replies      
The same MitM update attack can be leveled against all Firefox users, and not just Tor browser users?
6
Mizza 3 hours ago 5 replies      
Tor is not, nor has it ever been, trustworthy. Hell, you can still try active deanonymization for yourself: https://github.com/Miserlou/Detour

This didn't used to be a problem, as it was essentially run as a sandbox project for the academic anonymity community. It was very up front about its capabilities and limitations.

Unfortunately, in recent years, the US government has been bankrolling more "privacy" software development through its propaganda arms (OTF, RFA, etc.), and the Snowden revelations have led private foundations to follow suit.

As such, the organization doubled down on rebranding itself as a "human rights" _tool_, as this is what grant-giving organizations love to promote (free speech in Iran, activist publishing, etc.). This, combined with overly enthusiastic do-gooders gaining more and more prominence in the Tor organization, has led to the dangerous situation of promoting inherently insecure software as a security solution to vulnerable people. This is a general problem in the scene (remember when those activists in South America got vanned for using CryptoCat?), and one that I've been guilty of myself in the past.

I really hope the new board steers them back to the academic realm and slaps a big red USE AT YOUR OWN RISK warning on the tin. Unfortunately, I think the opposite will happen.

7
mdadm 3 hours ago 1 reply      
>The entire security of the Tor Browser ecosystem relies on the integrity of a single TLS certificate that has already been previously compromised.

Seriously? That seems like a really weird - to say the least - decision to make about something this important...

8
rnhmjoj 3 hours ago 0 replies      
Is it really so easy to control a significant portion of Tor exit nodes? I seem to remember there are automatic systems and members of the project checking for suspicious nodes.
16
Bad science persists because poor methods are rewarded economist.com
54 points by feelthepain  4 hours ago   30 comments top 10
1
acscott 4 minutes ago 0 replies      
"Science" has now many definitions. My personal understanding of "science" is this:https://www.youtube.com/watch?v=EYPapE-3FRw
2
ChicagoBoy11 2 hours ago 2 replies      
The article reminded me of Robert Axelrod's excellent "The Evolution of Cooperation." Axelrod uses similar computer modeling to create a tournament to help tease out how cooperative strategies emerge from primarily self-interested behavior. It is one of the best things I have ever read in my life, and I'd recommend that book to anyone.

The real issue with scientific publishing is that there is simply no penalty for publishing shoddy research. I know several academics who made quite a big name for themselves on research that was later partially or fully retracted. No one cared about that; there was no real reputational damage done. To tackle poor science, such "poor" scientific inquiry should be "punished" in some way. Similarly, it is terrible for the advancement of science that only novel or significant results get published -- there should be a way for researchers to benefit from publishing well-designed research which simply did not yield interesting results.

How to do that? I think some of Axelrod's tournament provides an answer. Like in his examples, the individual incentives align to yield a pretty poor outcome for the members of his population (he runs an iterated prisoner's dilemma game). However, by correctly setting up the iteration parameters, a cooperative strategy slowly becomes the evolutionarily stable strategy.

I can see how this could also be the case for academics. There is no law from up above that dictates that "number of papers published" is the ultimate metric of success. There is a culture, and processes, and institutions which have led that to be a leading indicator of academic success. If there were real motivation and impetus to change this, there is no reason to imagine that other metrics (and processes) could emerge that would much more highly value scientific integrity and thoroughness.
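A deliberately crude toy model makes the selection dynamic concrete (this is my own sketch, not the model from the paper; the payoff function and parameters are invented). If "productivity" falls with methodological effort and selection copies the most productive lab while the least productive one dies, low effort takes over the population even though nothing rewards it directly:

```python
import random

def step(labs):
    # Each lab is just an effort level in [0, 1]. Lower effort yields
    # more papers per cycle in this toy payoff.
    scored = []
    for effort in labs:
        papers = 10 - 9 * effort  # high effort => fewer, slower papers
        scored.append((papers, effort))
    # Selection on paper count: copy the most productive lab,
    # drop the least productive one.
    scored.sort(reverse=True)
    best = scored[0][1]
    return [best] + [effort for _, effort in scored[:-1]]

random.seed(0)
initial = [round(random.random(), 2) for _ in range(20)]
labs = list(initial)
for _ in range(50):
    labs = step(labs)
# The population converges on the lowest-effort lab in the initial pool.
```

Changing the payoff so that failed replications cost more than extra papers earn is exactly the kind of "iteration parameter" change that could flip which strategy is evolutionarily stable.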

3
rwallace 1 hour ago 0 replies      
This is the one part of the article that is counterintuitive:

> Worryingly, poor methods still won, albeit more slowly. This was true in even the most punitive version of the model, in which labs received a penalty 100 times the value of the original pay-off for a result that failed to replicate, and replication rates were high (half of all results were subject to replication efforts).

How can bad results still confer a net reward on their producers with a penalty like that?

4
taeric 3 hours ago 3 replies      
I would say it is less that poor methods are rewarded, and more that the investment in proper methods is expensive, and thus penalized.

That is, nobody is particular looking for people doing poor work and handing out rewards. However, the "proper" methodology that we want necessitates time. Something that is expensive and there are already plenty of things eating at the budgets of work out there.

I do think this can be made better. But I see no reason to think it is just us chiding people for not doing better.

5
bsder 1 hour ago 0 replies      
While I'd like to slag science for this, we also have a bit of counterexample in other fields.

In art, we have studies that show that those who produce more also produce better. And it compounds. There is no reason to believe that at least some of this isn't operant in science.

The real problem is the lack of positive incentives for negative results. And I don't know how you fix that.

6
bootload 2 hours ago 0 replies      
"... his finding also suggested some of the papers were actually reporting false positives, in other words noise that looked like data. He urged researchers to boost the power of their studies by increasing the number of subjects in their experiments."

Conclusions based on low sample rates, should be seen as poor technique, an indicator of potential bias and suggest a lot more validation is required before acceptance.

7
SilasX 32 minutes ago 0 replies      
We can generalize: for all X, bad X persists because some incentive structure makes it locally optimal for people to produce bad X.
8
martincmartin 3 hours ago 3 replies      
Would it help to move away from print journals, to online journals with comments sections so the community can discuss any potential flaws?

Publishing the data seems like it would help, although might not be possible where there are privacy concerns.

9
kayhi 3 hours ago 1 reply      
Poor methods are partly a result of inadequate data/descriptions being provided by researchers to the publication (and partly of publications not requiring them).

In many of the hard sciences there is not a requirement to list the products used such as chemical reagents or plasticware in a given experiment.

10
VikingCoder 3 hours ago 2 replies      
...20 researchers try to study something. But only one of them gets statistically significant results (p < 0.05), so they publish and the rest don't.

http://xkcd.com/882/

If we don't address this, then the whole research publication process is on fire.
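The arithmetic behind that scenario is easy to check: with 20 independent tests of a true null hypothesis, each at alpha = 0.05, the chance that at least one comes out "significant" by luck alone is substantial.

```python
# Probability that at least one of 20 independent null tests comes up
# "significant" at p < 0.05 purely by chance.
alpha, tests = 0.05, 20
p_any_false_positive = 1 - (1 - alpha) ** tests
print(f"{p_any_false_positive:.1%}")  # prints "64.2%"
```

So the xkcd setup is not a fringe case: publish-the-one-significant-result selection turns a 5% error rate into roughly even odds of a spurious headline.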

17
Writing an OS in Rust: Returning from Exceptions phil-opp.com
105 points by phil-opp  6 hours ago   20 comments top 5
1
adamnemecek 4 hours ago 2 replies      
I think that now that we have a language suitable for writing kernels, we also need to have a discussion about what we really want from a new OS. Over the years there have been quite a few OSs, and some of them were more advanced than the currently popular ones. E.g. AS/400 didn't make a distinction between memory and disk, which seems to me like a good idea. BeOS had a fully async API, which resulted in a much better user experience and better CPU utilization. Tandem had Erlang-like processes. None of the popular OSs have these. I'm not sure I want another UNIX implementation.

Furthermore, why are all the APIs so diverse? Why aren't there reactive operating systems (as in OS with reactive API)? All of these ideas can be explored in Rust but on some level I'm not sure what should be the feature set of the OS of the future.

The current driver models aren't that great either.

2
kibwen 4 hours ago 0 replies      
The amount of effort that Phil puts into these posts is really fantastic. Not only are they a great example of how to leverage Rust's abstractions to provide some level of safety in an unsafe domain, but they're also (IMO) approachable enough to appeal to people who have never worked at such a low level before, growing the population of systems programmers in the process. Keep it up! :)
3
amluto 2 hours ago 0 replies      
One minor sort-of-error:

> The iretq instruction is the one and only way to return from exceptions and is specifically designed for this purpose.

Not quite true. STI; LRET works too, and it's faster for stupid reasons.

Also, the AMD architects blew it badly here. That quote from the manual:

> IRET must be used to terminate the exception or interrupt handler associated with the exception.

Indicates that the architects didn't think about how multitasking works. Consider:

1. User process A goes to sleep using a system call (select, nanosleep, whatever) that uses the SYSCALL instruction.

2. The kernel does a context switch to process B.

3. B's time slice runs out. The kernel finds out about this due to an interrupt. The kernel switches back to process A.

4. The kernel returns to process A's user code using SYSRET.

This is an entirely ordinary sequence of events. But think about it from the CPU's perspective: the CPU entered the kernel in step 3 via an interrupt and returned in step 4 using SYSRET, which is not the same thing as IRETQ. Oh no!

It turns out that this actually causes a problem on AMD CPUs: SYSRET will screw up the hidden part of the SS descriptor, causing bizarre crashes. Go AMD.

Intel, fortunately, implemented SYSRET a bit differently and it works fine. Linux has a specific workaround for this design failure -- search for SYSRET_SS_ATTRS in the kernel source. I don't know how other kernels deal with it.

Of course, Intel made other absurd errors in their IA-32e design, but that's another story.

4
haberman 2 hours ago 1 reply      
A very interesting article. One thing that stood out to me:

> Unfortunately, Rust does not support [a save-all-registers calling convention]. It was proposed once, but did not get accepted for various reasons. The primary reason was that such calling conventions can be simulated by writing a naked wrapper function.

Followed by:

> However, auto-vectorization causes a problem for us: Most of the multimedia registers are caller-saved. [...] We dont use any multimedia registers explicitly, but the Rust compiler might auto-vectorize our code (including the exception handlers).

This seems like a pretty convincing argument in favor of supporting this calling convention explicitly: only Rust knows what registers it is actually using. The current approach devolves into preserving every register that Rust might possibly use.

AVX-512 has 2kb of registers alone! That's a lot of junk to save to the stack on the off-chance that Rust decides to super-auto-vectorize something.

5
Animats 4 hours ago 2 replies      
Hm. Is that code using the user's stack to handle an exception or interrupt? That's unsafe. If there's not enough user stack space (something the user can force), the kernel will get a double fault, usually a kernel panic condition. Normally, OSs above the DOS level switch to a kernel stack on an exception or interrupt.

There's hardware support to help with this; see "Task state segment" (16 and 32 bit x86 only, amd64 is different).

18
Laureline: discontinued open hardware/software GPS NTP server tindie.com
11 points by ashitlerferad  1 hour ago   1 comment top
1
privong 12 minutes ago 0 replies      
Interesting. I do like that it's open-source hardware.

Note that you can build your own, with a raspberry pi and a GPS add-on board that sends a 1 pulse-per-second signal through the GPIO pins. Sample instructions here:

http://www.satsignal.eu/ntp/Raspberry-Pi-NTP.html

And the GPS board can be purchased here:

https://store.uputronics.com/index.php?route=product/product...
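For reference, the ntpd side of such a build commonly looks roughly like the fragment below. The driver numbers are from memory of the standard reference-clock drivers (type 28 = gpsd shared memory, type 22 = kernel PPS) and the exact fudge options vary by setup, so treat this as a sketch to check against the linked instructions:

```conf
# Coarse time-of-day from gpsd via shared memory (driver 28).
# The PPS driver needs a "prefer" peer like this to number the seconds.
server 127.127.28.0 minpoll 4 maxpoll 4 prefer
fudge  127.127.28.0 refid GPS

# Kernel PPS from the GPIO pulse (driver 22) disciplines the
# clock to the top of each second.
server 127.127.22.0 minpoll 4 maxpoll 4
fudge  127.127.22.0 refid PPS
```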

19
Shaarli Personal, minimalist, database-free, bookmarking service github.com
48 points by dsr_  4 hours ago   21 comments top 7
1
luckman212 2 hours ago 1 reply      
Neat, but I'll stick with Pinboard. It's basically perfect.
2
jwebb99 2 hours ago 2 replies      
Strange. A few days ago, I announced my bookmarking service (http://paperbin.co) and it was barely noticed. Yet, Shaarli has 32 votes within an hour.
3
ge96 40 minutes ago 0 replies      
Question: how does it store data without a database? Or does it not store data?
4
cocktailpeanuts 3 hours ago 1 reply      
Would be nice to hear how this is database-free. Is it a static site generator? Is it using files?
5
randomsofr 1 hour ago 0 replies      
The UI looks old.
6
campuscodi 3 hours ago 1 reply      
This must be the 3rd time I see this on HN in the last 5-6 years.
7
threepipeproblm 2 hours ago 0 replies      
I love the name.
20
Stanford Expert Explains Antibacterial Soap Ban stanford.edu
166 points by CapitalistCartr  9 hours ago   76 comments top 11
1
MOARDONGZPLZ 6 hours ago 1 reply      
Interesting! This seems to be the crux of it:

That [2005 study in Pakistan] compared the health outcomes from antibacterial soap and soap that was indistinguishable from and otherwise chemically identical to the antibacterial soap, but without triclocarban. Compared with a control group who received school supplies, children living in households who received soap and handwashing promotion had 52 percent less diarrhea, 50 percent less pneumonia and 45 percent less impetigo. Impetigo, a skin infection, was a particularly important outcome, because laboratory studies had suggested that triclocarban would have antibacterial activity against the organisms that most commonly caused impetigo. There was, however, no difference in any of the health outcomes between children living in households who received the plain soap compared with children who received the antibacterial soap.

Related, it appears school supplies are not only ineffective against bacteria that cause diarrhea, pneumonia, and impetigo, but may cause these ailments.

2
Someone1234 6 hours ago 5 replies      
I'm glad they're doing this.

This is why I moved our household over to the "Method" brand a few years ago (no triclosan). I'd happily move again if something else was safer, but I do enjoy foaming hand soaps.

It's also why, for our newborn, I purchased WaterWipes (water + fruit juice only). They massively reduced diaper rash compared to the Huggies-branded wipes we were using before (and can be used on the face, because they won't upset the stomach if consumed).

I like the FDA and am glad they exist, but feel like they were slow to act in this case. We've known for almost ten years (via peer-reviewed science) that these compounds are unsafe and ineffective.

3
grenoire 7 hours ago 0 replies      
In line with this development, the United Nations also recently decided to release a unanimously ratified declaration regarding antibiotics [1]. It concerns educating the public on the use of antibiotics, development of new antibiotics, and surveillance and regulation of current use of antibiotics on humans and animals.

[1]: http://www.independent.co.uk/news/science/un-signs-groundbre...

4
keepper 5 hours ago 3 replies      
This reminds me (as does, I guess, every time I leave a shared bathroom) how people really don't know how to wash their hands [1]. I guess antibacterial soap came in as a nice marketing gimmick to compensate for quick hand washing.

I've stopped shaking some people's hands after seeing them "drizzle some water" for a second after, well, you know what.

[1] http://www.mayoclinic.org/healthy-lifestyle/adult-health/in-...

5
Negative1 3 hours ago 0 replies      
Quick Summary: Probably safe but not any more effective than normal soap. Possibly harmful to the environment so not worth the risk of environmental damage.
6
nsxwolf 4 hours ago 1 reply      
Are they going after the toothpaste next? I've used Colgate Total for like 20 years now and my gums really notice when I run out and use something else for a week.
7
secabeen 5 hours ago 3 replies      
It's going to be really interesting to see what Henkel does with Dial soap, which has been defined by its antibacterial ingredient for years. (I actually use Dial Basics in the shower because it's the closest thing I can find to cheap, plain soap with no moisturizers or antibacterials, but that's not a product they promote much outside of dollar stores.)
8
zappo2938 5 hours ago 4 replies      
Along the same lines, Johns Hopkins tested automatic faucets with infrared sensors and standard faucets with hand levers for hazardous bacteria. After the study they removed all the automatic faucets because the standard faucets were much cleaner and safer. [0]

[0]: http://www.hopkinsmedicine.org/news/media/releases/latest_ha...

9
dsjoerg 6 hours ago 0 replies      
One point for science!
10
tn13 6 hours ago 6 replies      
Please correct me if I am wrong, but the government banned something because it is not useful? Or were these soaps violating the "no harm done" rule too? When is the government banning Axe for failure to attract hot chicks?
11
skgoaspghqoghn 5 hours ago 1 reply      
Best solution: make your own hand soap, or find someone who does it for cheap. Support local products, save money, and save the world at the same time.
21
Fibers in Guile Scheme github.com
33 points by srean  3 hours ago   9 comments top 2
1
dmix 3 hours ago 3 replies      
Anyone using Guile for their projects? Any feedback on their experience with it?

I only know Scheme from reading SICP and enjoyed Clojure but hated the java/JVM part of it. I currently use Erlang for when I need concurrency/performant backends. But I'm not totally satisfied with it (for ex: the weak type system and records).

Edit: oh, looks like two programs I use all the time use Guile: GNU Make and WeeChat - https://en.wikipedia.org/wiki/GNU_Guile#Programs_using_Guile

2
flukus 1 hour ago 1 reply      
What's the state of other language interpreters for Guile? It's one of those things I always wanted to have a play with, until I remember how much I hate Lisp.
22
I Used to Be a Human Being nymag.com
290 points by oscarwao  11 hours ago   178 comments top 23
1
morgante 5 hours ago 5 replies      
To me, this reads like an alcoholic ranting about how everyone should stop drinking. Yes, technology can be addicting. That doesn't mean we're "helpless" or even that it's bad. I'm really sick of these hyperbolic luddites.

Personally, it's not at all a problem. I'm not a technophobe (I have an iPhone, use Facebook, etc.), but it's done nothing but positively improve my life. I can keep in touch with friends around the world and work remotely from anywhere, all thanks to the beauty of these "distracting" technologies.

However, even though my life is heavily entangled with technology I certainly don't feel "addicted" to it. I have no problem going out into the wilderness for a week and having 0 contact with the world. I certainly don't interrupt conversations to check my phone.

If technology is hurting your life, the problem could just as easily be with you as it is with technology.

2
jonstokes 7 hours ago 4 replies      
The number one thing I've found to be effective in combatting this is sleep. I have to get 8 hours of uninterrupted, unmedicated sleep. 9 hours is even better. And I have to do it for a few nights in a row.

If I do anything at all that interferes with the quality of my sleep, I'm screwed, and The Stream will suck me in. Conversely, things that increase the quality of my sleep -- exercise, diet, not using gadgets after a certain hour -- all increase my ability to fight off Stream-induced distraction.

So that's my recommendation. YMMV.

3
simonbarker87 10 hours ago 10 replies      
I read somewhere recently that smartphones have essentially become random gratification engines, in the same way as slot machines. Sometimes on a slot machine you win the jackpot; because of this, every pull gives you an endorphin kick. Each time we pick up our phones or get a notification, it might be an important/interesting thing, so we get a little endorphin kick that keeps us checking.

Because of this I turn off all notifications apart from phone calls and text messages. I don't have app badge counters or notifications for email either; if it is that important, the person will ring me.
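The slot-machine comparison above is what behavioral psychology calls a variable-ratio reinforcement schedule: each check pays off unpredictably, which is the pattern most resistant to extinction. A minimal simulation of that schedule (the 5% payoff rate is an illustrative assumption, not a figure from any study):

```python
import random

def simulate_checks(n_checks, payoff_prob, seed=42):
    """Simulate n_checks phone checks, where each one is 'rewarding'
    (an interesting notification) with probability payoff_prob."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_checks) if rng.random() < payoff_prob)

# Even a low payoff rate delivers a steady trickle of intermittent
# rewards, which is exactly what conditions compulsive checking.
hits = simulate_checks(200, 0.05)
```

The point of the sketch is that the reward rate can be tiny and still sustain the habit; turning off notifications removes the "pull of the lever" entirely rather than trying to lower the payoff.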

4
resetting 10 hours ago 14 replies      
I'd be interested to hear everyone's strategies to combat this, as I assume many of us that work in the industry encounter similar problems of distraction and inundation.

First, I don't use any devices directly after waking up. I meditate for about 20 minutes upon waking and then try to read fiction for 40 minutes. So, all in, somewhere around an hour of no device distractions before starting my day.

Slack is one of the biggest interrupting factors while coding these days, more so than IRC ever was for me, so I try to have chunks of time during the day with it closed. This is something I've struggled with recently as co-workers always expect to be able to get in touch, but often I really need 30-60 minutes of uninterrupted focus for real tasks.

For personal things, I deleted Facebook and feel quite a bit better. I still scroll Instagram too much. I deleted the Twitter app from my phone and will only check it from time to time on the web. I try to turn on Do Not Disturb mode in the evenings, but it's hard when you have systems that potentially could go down and things could get escalated to you.

On top of that I try to take psychedelics a few times per year, not in any type of party settings, but with people that are close to me. Screens tend to turn up this feeling of disgust when I look at them in that state, so I automatically disengage with them. I find that for at least a short while after the trip my usage of distracting Internet things goes down a lot as well.

Interested to hear other strategies!

5
hitekker 10 hours ago 2 replies      
"I used to be a human being": I like this phrase. It captures, poetically, the existential despair of losing the conditions of humanity. Regardless of whether the author's particular circumstances necessitate the implicit drama, each one of us, I'm sure, has felt like we lost what made us "real" or "authentic" at some point or another.
6
int_19h 3 hours ago 0 replies      
What I would say to this guy is this.

Your "virtual" life is also real. The news that you read happens in the real world. The people whom you talk to are real people. There's no "real life" and "non-real life". Everything that happens to you is "real life" by definition. That includes social networks etc.

The only thing that matters is whether it is a life that you're comfortable with, or not. And there is certainly a point that many people are not actually comfortable, but are forced into conforming.

Being uncomfortable about it because it's "not real" is a fallacy, though.

7
Raphmedia 9 hours ago 2 replies      
I fixed my brain by cancelling my internet services. All I had was 1gb of cellphone data.

Never been happier.

Right now I am back with an internet subscription and I'm miserable. I am an internet addict and having internet at home is very bad for me. I'm strongly considering cancelling my internet subscription again.

8
FuNe 11 hours ago 1 reply      
> The interruptions often feel pleasant, of course, because they are usually the work of your friends. Distractions arrive in your brain connected to people you know (or think you know), which is the genius of social, peer-to-peer media.

Silly as it will sound, I'd never thought of it that way. Cunning, brilliant, and cunning again.

9
mesmerizingsnow 4 hours ago 1 reply      
What's clicked for me is:

https://selfcontrolapp.com/

It's an open source project, too:

https://github.com/SelfControlApp/selfcontrol/

What I do is, essentially, the following: every day at around 0-1 am (before I go to sleep) I set a timer so that it will fire at 7-8 pm (when I get home). This forces me to stay off unnecessary sites (Facebook, Twitter, HN, Reddit, etc.) that would otherwise distract me from doing meaningful work.
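A minimal sketch of the same idea, assuming a hosts-file approach rather than SelfControl's actual mechanism (the domain list and the 1 am to 7 pm window are illustrative, taken from the routine described above):

```python
from datetime import time

# Illustrative list of distracting domains to block.
BLOCKED = ["facebook.com", "twitter.com", "news.ycombinator.com", "reddit.com"]

def in_block_window(now, start=time(1, 0), end=time(19, 0)):
    """True if `now` (a datetime.time) falls inside the blocking window.
    Handles windows that do or don't wrap past midnight."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end

def hosts_entries(now):
    """Lines to append to /etc/hosts while the window is active,
    pointing each blocked domain at localhost."""
    if not in_block_window(now):
        return []
    return [f"127.0.0.1 {d}\n127.0.0.1 www.{d}" for d in BLOCKED]
```

The hard part of a real blocker isn't this logic; it's resisting the user simply editing the hosts file back, which is what dedicated tools like SelfControl are actually good at.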

As for the other devices: not long ago I had Twitter, Facebook, VK, and a few other social media apps on my iPhone. I deleted everything except Twitter, which I check rarely (<= 10m a day). Deleting the Facebook app contributed greatly to my smartphone's battery life as well.

10
agentcooper 5 hours ago 1 reply      
Shameless plug: I created a Chrome extension which makes me wait 1 minute before proceeding to one of the distracting websites; it helps me a lot. Maybe it will help you too: https://chrome.google.com/webstore/detail/better-things-to-d....
11
RUG3Y 8 hours ago 0 replies      
I spent the weekend working on my Honda, replacing the timing belt, water pump, clutch cylinders. It's been a long time since I've come up for air, so to speak, and spending the entire weekend off the computer was immensely gratifying. I found more satisfaction in fixing my car than I have in anything for a long time. I found myself thinking that perhaps being on the internet constantly has caused me to lose some taste for life.

I'm taking a vacation soon, 9 days with no computer will be a much needed reset.

12
Disruptive_Dave 6 hours ago 0 replies      
Meditation, meditation, meditation. These devices/games/stimuli are training your brain, so it's up to you to actively combat that with your own training. That's how I view it - you are being programmed one way or another, either by outside forces or by your own doing.
13
tim333 9 hours ago 1 reply      
I rather like the internet connected version of living.
14
ryandrake 8 hours ago 7 replies      
Did this really need a 7000+ word essay? If you don't like Facebook, don't use it. If you're concerned about "distraction overload" then remove things from your life you find distracting. Nobody is putting a gun to your head and forcing you to use smartphone apps and 10 different text messaging systems. If you want to live as a technology-free hermit, go right ahead. But don't try to argue that I should live like you.

The author instead seems to be taking his personal opinions about technology and trying to apply them to everyone. It's not enough that he doesn't like these things. We should not like them either and here's why. Time with my kid is invalid because I also have the TV on? GPS leads me to stop remembering things? Emojis are unsuitable replacements for voicemail? The author imagines restaurants where smartphones must be surrendered upon entering. Cool, you've described a place I'll actively avoid. Where's that "stop liking things I don't like" GIF when you need it?

"Our enslavement to dopamine?" -- how about YOUR enslavement to dopamine? Leave me out of it.

15
n72 5 hours ago 2 replies      
I'm considering writing a plugin which, much like an ad-blocker, removes click bait links. There would be various black lists one could use. Any feedback on implementation appreciated.
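As feedback on implementation: before any curated blacklist, a first cut could be plain regex heuristics over link text. A hedged sketch (the patterns are illustrative guesses, and a real plugin would run equivalent logic in the browser rather than in Python):

```python
import re

# Illustrative clickbait title shapes; a production list would be curated.
CLICKBAIT_PATTERNS = [
    r"^\d+\s+(things|reasons|ways|facts)",
    r"you won'?t believe",
    r"will blow your mind",
    r"what happen(ed|s) next",
    r"this one (weird )?trick",
]
_CLICKBAIT = re.compile("|".join(CLICKBAIT_PATTERNS), re.IGNORECASE)

def is_clickbait(title):
    """Heuristic: does the link text match a known clickbait shape?"""
    return bool(_CLICKBAIT.search(title))

def strip_clickbait(links):
    """Given (title, url) pairs, drop the ones flagged as clickbait."""
    return [(t, u) for t, u in links if not is_clickbait(t)]
```

Like an ad-blocker, the interesting design question is who maintains the lists: regexes catch the obvious shapes cheaply, while per-domain blacklists handle the sites that have moved past them.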
16
CMYK5 9 hours ago 0 replies      
The problem is that these technologies are increasingly necessary to function within normal society. Do we somehow regulate the allowable behaviors of apps? Is it on us to improve our self control so we can act as engaged, but distant participants?

There's also the ethical question around people not being aware of their obsessions/addictions (especially in the mobile gaming space)

17
hprotagonist 7 hours ago 1 reply      
reminds me of the poem "The World Is Too Much With Us", by William Wordsworth:

>The world is too much with us; late and soon,

Getting and spending, we lay waste our powers;

Little we see in Nature that is ours;

We have given our hearts away, a sordid boon!

This Sea that bares her bosom to the moon;

The winds that will be howling at all hours,

And are up-gathered now like sleeping flowers;

For this, for everything, we are out of tune;

It moves us not. Great God! I'd rather be

A Pagan suckled in a creed outworn;

So might I, standing on this pleasant lea,

Have glimpses that would make me less forlorn;

Have sight of Proteus rising from the sea;

Or hear old Triton blow his wreathèd horn.

18
n72 5 hours ago 0 replies      
The best thing I found to combat this is make distraction difficult. To that end, the best two things I did was write an FF plugin to hide FB's news feed and removed the News feed from my iPhone. Now I actually have to type a URL if I want distraction, which becomes something of a deterrent.
19
marmot777 7 hours ago 1 reply      
I see his point and used to read his blog. But could he have been just as effective online while scheduling in a real life? That is, he could have achieved as much or more with some measure of balance. Easier said than done.

I'm not saying I have a life but I'm hoping to get one real soon now.

20
SandersAK 9 hours ago 0 replies      
This article could have been called "here's all the thoughts I have after watching anime for the first time"
21
nxzero 9 hours ago 1 reply      
Just wait until machine learning gets really good at providing information that's of value that you "must have" to get through the day.
22
kubernetizen 6 hours ago 0 replies      
Gossip + Random Rewards = Addiction
23
jkot 11 hours ago 2 replies      
> I couldn't check my email or refresh my Instagram

> ... trying to describe what I was feeling. The two words "extreme suffering" won the naming contest in my head...

23
Show HN: Lemonade the world's first P2P insurance company lemonade.com
88 points by gilsadis  3 hours ago   78 comments top 20
1
h4nkoslo 2 hours ago 4 replies      
This is wildly un-new. Mutual insurance companies have existed for literally hundreds of years. Oldest one I can find in 5 minutes dates to 1762, and if you include merchant insurance organizations, probably <1400 (although those probably end up looking more like equity arrangements).

https://en.wikipedia.org/wiki/Mutual_insurance

https://en.wikipedia.org/wiki/The_Equitable_Life_Assurance_S...

2
samfisher83 2 hours ago 3 replies      
It looks like these are the companies that are really insuring you:

Lloyd's of London, Berkshire Hathaway's National Indemnity, XL Catlin, etc.

Basically they are buying a policy from one of those companies, adding 20%, and selling it to you.

An insurance company works by spreading risk over a large area. By selling everything in NY they are increasing their correlation, which raises risk. One of the reasons for the subprime crisis was that no one expected housing to fall in all markets at the same time.
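The geographic-concentration point is just the arithmetic of correlated risks: for n policies each with claim standard deviation sigma and pairwise correlation rho, the variance of total claims is n·sigma² + n(n−1)·rho·sigma², so it grows linearly in n when claims are independent but quadratically when they are correlated. A small check of that formula (the numbers are illustrative, not actuarial):

```python
def book_variance(n, sigma, rho):
    """Variance of total claims for n policies, each with standard
    deviation sigma, under pairwise claim correlation rho."""
    return n * sigma**2 + n * (n - 1) * rho * sigma**2

# 10,000 policies, per-policy claim std dev of $1,000 (made-up figures).
independent = book_variance(10_000, 1_000, 0.0)
concentrated = book_variance(10_000, 1_000, 0.3)  # same city, shared perils
```

With rho = 0.3 the concentrated book's variance comes out roughly 3,000 times the independent case, which is why insurers diversify across regions and buy reinsurance for the correlated tail.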

3
CodeWriter23 2 hours ago 4 replies      
I got to "enter your name" and leaked out of the funnel. I'm willing to give up my zip code to find out how you compare to my current provider. Give me some good rates to entice me out of the rest of my data.
4
allendoerfer 2 hours ago 3 replies      
How is this more "P2P" than other insurance companies are? Non-profit, yes, but the mechanism seems to be the same. There is still a central pot everybody pays into.

I would like a non-profit that just pays back the spare money even more. I do not know how this would work with regulations. I guess in Germany this could be done through a "Genossenschaft", which Wikipedia tells me has a US equivalent called a co-op. Would this actually work?

Edit: Realized that you could just grant discounts as there cannot be a profit anyway. Would be awesome to see several companies with the same model, first competing on prices and ultimately the percentage of the fixed fee.

5
mikeryan 2 hours ago 1 reply      
This looks awesome I might check it out.

But the "P2P" branding is likely going to be confusing to a lot of people. In fact even after reading the explanation I still don't understand the peer to peer model in this context and I know what Peer to Peer means.

6
daschreiber 2 hours ago 1 reply      
Daniel from Lemonade here. Totally understand how P2P can be confusing as a term. What we mean by it is that we use each group's premiums to pay their claims, with leftover money going back to the group's common cause. To us P2P is a shorthand for: 'it's not our money'!
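That description reduces to a simple accounting flow: a fixed fee comes off the top, claims are paid from the pooled remainder, and whatever is left goes to the group's cause. A toy sketch of that flow (the 20% fee is from this thread; every other number is made up, and real reinsurance terms are far more involved):

```python
def settle_pool(premiums, claims, fee_rate=0.20):
    """Split a group's premiums into (fee, claims paid, donation).
    Assumes reinsurance absorbs any shortfall, so the donation floor is 0."""
    total = sum(premiums)
    fee = fee_rate * total          # the insurer's fixed take
    pool = total - fee              # "it's not our money"
    paid = min(sum(claims), pool)   # shortfall beyond the pool -> reinsurer
    donation = pool - paid          # leftover goes to the common cause
    return fee, paid, donation

# 50 members paying $100 each, two claims during the period.
fee, paid, donation = settle_pool(premiums=[100.0] * 50, claims=[900.0, 600.0])
```

The structural point is that the fee is fixed up front, so the insurer has no stake in denying claims; only the donation shrinks when claims rise.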
7
avitzurel 1 hour ago 2 replies      
I read some of the comments about the funnel and I set out to try it myself.

There are 8 steps to get a quote:

1. First and last name
2. Full address
3. Question (renter/owner)
4. Roommates/alarm
5. Current owner of insurance?
6. Jewelry over $1,000?
7. Email, birthday
8. Quote, which seems highly generic and could be done without 6 of the 7 previous steps.

I can't even imagine the conversion rate from just checking it out to paying customer, it can't be too high at all (outside the founders circle).

ZipCode -> Quote should be the only step. The rest should happen after you convinced me about your value. By the way, don't email thisisridiculous@gmail.com, it's not really my email.

8
gruez 2 hours ago 1 reply      
>A transparent 20% fee to run everything

how does that compare with the profit margins of a traditional insurance company?

9
CaveTech 2 hours ago 0 replies      
From what I can tell P2P means it's essentially an insurance co-op. It's definitely not a world first - I had renters insurance over 10 years ago that worked this way (except I got actual dividends instead of donations in my name).
10
ag56 1 hour ago 0 replies      
A better -- but still not really P2P -- example of social insurance is https://wearesosure.com (UK mobile phone only for now)

With So-Sure you link up with friends and are bonused if nobody claims. Of course that means nobody links with that friend that always loses their phone, which in theory reduces their risk and pays for the bonus.

11
sbuttgereit 1 hour ago 1 reply      
So... what are the supported causes? What is the criteria for a cause to be qualified for support?

There are charities and causes I do support and there are those that I don't. There are charities that oppose each other in their stated goals as well.

I see a section that talks about becoming a supported charity, but nothing about criteria or who is already in.

12
utternerd 2 hours ago 0 replies      
I don't follow how this is P2P, and unfortunately it seems only available for New York zip codes?
13
gilsadis 1 hour ago 1 reply      
Hey, Gil from Lemonade here. I see quite a few p2p related comments here and I totally understand how P2P can be confusing as a term. This video can help explain the concept: https://www.youtube.com/watch?v=6U08uhV8c6Y

tl;dr: We use each group's premiums to pay their claims, and unclaimed money goes back to the group's common cause.
14
thecosas 1 hour ago 0 replies      
Assuming you guys are gathering some data off the "get a quote" form to figure out where the most interest is outside of your current market.
15
matiasz 2 hours ago 0 replies      
On the home page, the apostrophe in "World's" should be a curly apostrophe, not a straight single quote.
16
vjvj 2 hours ago 1 reply      
Cool concept, looking forward to seeing roll out. Under your definition of p2p would heyguevara.com not fall into this too?
17
cheriot 2 hours ago 0 replies      
I don't understand the P2P claim either, but I'll upvote anything involving Dan Ariely.
18
Vendan 2 hours ago 0 replies      
Why is the "For New York" below the fold?
19
executive 2 hours ago 0 replies      
app != P2P
20
mmanfrin 2 hours ago 0 replies      
So will this P2P insurance company employ an AI with a Bot interface to help people handle their claims? Maybe they can apply some Deep Learning?
24
Three Sheldons buzzfeed.com
17 points by samclemens  2 hours ago   2 comments top 2
1
woodruffw 43 minutes ago 0 replies      
This was a real pleasure to read (in terms of prose and not content, obviously). I'm gratified to see BuzzFeed make the jump to professional, long-form pieces.

Kudos to the author for the excellent writing, and to the Sheldons for sharing their story. Speaking as a white man who grew up 8 blocks from the apartment mentioned in the article, it's both surreal and saddening to learn how drastically different (meaning unfair) their lives were and continue to be.

2
Mauricio_ 1 hour ago 0 replies      
Top 3 Sheldons that will blow your mind!
26
Why Finnish babies sleep in cardboard boxes (2013) bbc.com
314 points by stevekemp  13 hours ago   226 comments top 32
1
chrissnell 9 hours ago 5 replies      
I love this, especially the inclusion of the book that helps new parents with the basics. Becoming a parent is such an overwhelming avalanche of advice (good and bad) and piles of crap. There is a massive industry dedicated to preying on the paranoia and naïveté of new parents. With that first baby, you're awash in a sea of ridiculous products like microwave baby bottle sterilizers and electric bottle heaters. By the second or third month, you come to realize just how worthless all of this junk is and you toss most of it and go back to the basics--the basics that are included in the Finnish baby box. How much easier it would have been if someone had handed us a box and a book and said, "Everything you will need for the first three months is in here."
2
stevekemp 12 hours ago 9 replies      
Posted because we just received our 2016 box, and I thought it was fascinating to see the contents as a Scottish man:

* http://imgur.com/a/I0NYI

3
pingec 11 hours ago 2 replies      
If anyone else wondered why the Finnish baby box includes condoms: http://help.finnishbabybox.com/article/9-why-do-your-baby-bo...
4
martinrame 9 hours ago 3 replies      
I feel sad reading this. Here in Argentina, until December 2015 we had a similar state-backed plan called Qunita. Now the new right-wing government has cancelled the plan without delivering 60k boxes, and a judge is trying to have the remaining ones burned, arguing they are unsafe (but none of the thousands of mothers who received them found any problem with the boxes).

Please read more (in spanish): http://www.infobae.com/politica/2016/09/14/bonadio-ordeno-de...

5
scraft 10 hours ago 1 reply      
I have seen this one before, but was happy to read through it again. I have zero interest in having children, but think this is a great idea and would be more than happy for my taxes to help pay for it. The thing is, if this did exist in the UK (where I live), it feels like it would be the first thing to be dropped at the next budget - feels like the only chance it would have of existing in the UK would have been if it was introduced 25+ years ago :(
6
Maarten88 10 hours ago 1 reply      
> it's designed to give all children in Finland, no matter what background they're from, an equal start in life

So nice to read this! It's a pity that ideals like these are things from the past, completely gone in our western, capitalist culture. Words like those now mostly invoke feelings of cynicism and snarks about socialism or communism.

7
jbredeche 11 hours ago 1 reply      
There's a company that sells a version of the Finnish baby box, and we got one for our kid: http://www.babyboxco.com/ - everyone loved it. My wife is part Finnish, but we're in the US, so this was a good approximation.
8
fao_ 11 hours ago 3 replies      
tl;dr: It's not because the cardboard boxes are significantly better than cribs and other sleeping methods, it's the fact that the box comes with almost everything a new parent needs to look after kids, which means they don't have to shell out $foo-money to be able to take care of the children properly, and aren't left worried that they left something out.
9
eggy 9 hours ago 2 replies      
I like the idea of all of the goodies, but as I have mentioned in another reply, I am on my third child, and they have always slept in bed with us, so the box and mattress is still weird for me. I have even slept with one of my babies in a small bed for all three of us with no issue.

We have never owned a crib or bassinet by the bedside even.

I may be wrong, but I remember hearing it was during Victorian times that this became popular.

I know there are people afraid of smothering or crushing their infants. Most negative studies seem to come out of the US. Other countries don't seem to have the same issues cited in US studies. I think it is common sense not to sleep with your baby in bed if you have a waterbed, tons of pillows, a soft mattress, or a fluffy comforter.

My wife and I don't drink alcohol either, so there's no possibility of being so drunk that we would be unaware of smothering our baby, and neither of us is hypermobile in bed. All mammals keep their young close. "Co-sleeping" is a modern term that makes it sound out of the ordinary, but in fact it is very ordinary.

People will say they slept better and the baby got used to being in another room, or across the room, but I can say from three babies worth of fathering, they get comfortable real quick being in bed with their parents.

10
keepper 7 hours ago 2 replies      
<rant>Not directly related to the article, but reading through the series of negative replies when a poster suggests "hey, this is a great idea, we should do it in the US" is disheartening.

A great many are so fiscally short-sighted, misguided, and downright hostile that it really takes away any hope for the "average American". The HN demographic should represent some of the best of the best of the US, and well... I guess this kinda explains the rise of Trump.</rant>

11
oftenwrong 5 hours ago 1 reply      
Last time I came across an article about this, I found out that there is a company that sells a version of the "Finnish baby box".

https://finnishbabybox.com/

12
_ph_ 2 hours ago 0 replies      
As millions of cats have shown, there is no better place to sleep than a cardboard box :)

Joking aside, this looks like a fantastically practical approach to social support. No complex and very expensive program, just a simple "starter set" for parents. It relieves a possible financial burden and, of course, makes sure that no immediately important item is missing at a time when parents are probably thinking about a lot of things, but not shopping.

13
NetStrikeForce 9 hours ago 0 replies      
We had a baby a few months ago. We got so many clothes from friends and family, some of it brand new, that our baby hasn't had the chance to use it all before outgrowing it.

We even got the pram, cot and car seats (plural!) for free. It's crazy.

Any clothing we bought was just because we liked it and not because we needed it. Seriously, I don't know how it is in northern cultures, but where I come from it is extremely common to get the clothes from your cousins (and pass them on to the next :)), regardless of your money (unless you are rich, in which case I don't know how it works :)).

I'm not sure if this is a Latin thing, Southern European thing, Mediterranean thing or what :)

Edit: My point being: Corruption (as mentioned by others in different countries) and the very strong family support networks in other cultures might be the reason why this doesn't exist in other European countries.

14
dancek 5 hours ago 0 replies      
You actually have the choice between the box or 140 euros, but not many people take the money. We did, once.

If you have twins, you're allowed three boxes---if triplets, six. We had twins so we opted for two boxes and the money.

15
nkassis 11 hours ago 1 reply      
As a way to bring something like this to the US, I wonder if it could work on a model similar to Toms: buy one, give one away to a family in need.
16
pxeboot 9 hours ago 0 replies      
They do give these out at some US hospitals:

http://www.adn.com/alaska-news/health/2016/07/04/alaskas-lar...

17
whafro 8 hours ago 0 replies      
I got two of these when our son was born a couple years back, and they were amazing. I posted a review and tons of photos:

https://www.care.com/c/stories/580/a-year-with-the-finnish-m...

Yes, we did use the box. Yes, we did use all the clothes, lotions, toys, etc. In fact, our son basically wore nothing except eurogarb for his first year.

There are some commercial versions of this available in the US, though they're relatively pricey. The fact that this is a public benefit in some places is truly awesome, and something I wish could be politically tenable here in the US.

18
Animats 4 hours ago 0 replies      
19
jcoffland 10 hours ago 2 replies      
We should try giving useful items to welfare recipients instead of just money. It would be a great way to ensure that the support was going to the right things. Why not give kids a back-to-school kit? This could be huge for kids in situations where no one will buy them the things they need for school. This happens a lot more than most people know.
20
toomanybeersies 10 hours ago 0 replies      
The BBC did a follow up article on this recently: http://www.bbc.com/news/magazine-35834370

Apparently spurred by the article, a few startups have appeared offering similar boxes in other countries.

21
arcanus 11 hours ago 2 replies      
If only the USA had a similar program!
22
riprowan 10 hours ago 0 replies      
Baby-box-industry lobbying intensifies
23
monkpit 11 hours ago 0 replies      
Build your own baby kit. (Just add baby)
24
tzakrajs 11 hours ago 1 reply      
That gear looks well designed.
25
penaman56 6 hours ago 0 replies      
I've heard that Finns even have another box for dads as a 'paternity package', which complements the content of the maternity package. But that is probably not state sponsored: https://www.instagram.com/p/BCj-EgjHrhs/
27
DanBC 11 hours ago 0 replies      
It's a great programme.

You might be interested in some of the comments here, from 3 years ago. https://news.ycombinator.com/item?id=5817728

28
vishalzone2002 10 hours ago 1 reply      
Great idea. I think in the US we could build a Warby Parker model for this.
29
marknutter 8 hours ago 0 replies      
I don't like how they correlate this cardboard box starter kit program to Finland's low infant mortality rate. The claim that the U.S. has such a high infant mortality rate compared to other industrialized nation has been thoroughly debunked[1].

[1] - http://www.washingtontimes.com/news/2014/oct/3/editorial-the...

30
gdelfino01 10 hours ago 1 reply      
In Venezuela the babies are being put in cardboard boxes too: http://www.dailymail.co.uk/news/article-3800058/The-youngest...
31
ChoHag 10 hours ago 0 replies      
Universal Baby Incubation will never work. Mothers will no longer buy anything for their baby because sufficient is supplied by the state.
27
Mobile financial services would increase emerging economies' GDP by $3.7T qz.com
21 points by prostoalex  3 hours ago   4 comments top 3
1
ekpyrotic 42 minutes ago 0 replies      
I'm one of the organisers of the FinTech For Good Summit in London (more info: http://fintechforgoodsummit.com/).

If this topic is interesting to you and it is a discussion you want to get involved in, please do email me on j@greenaway.me. All welcome!

We are building a powerful group of people to advocate for the transformative impact of FinTech in developing economies, and we'd love to start a conversation with you too.

2
executesorder66 3 hours ago 1 reply      
... if more people could handle their finances from their mobile phones

is the rest of the title.

Also, the bank I work for allows mobile banking for 11 countries in Africa. 11/54 is not bad if you ask me.

3
lifeisstillgood 1 hour ago 0 replies      
I am trying to envisage the different possible ways new banking services can emerge. Can we ever expect a crypto currency to work? Or are we forever reliant on trusting third parties (ie banks)?

Having mobile banking services is possible, of course, but actual banking services are more than tracking my latte spend. The guy who founded Bank of America walked into San Francisco the day after the 1906 earthquake with a wheelbarrow of cash and started lending to the shop and business owners who needed to rebuild immediately.

That's the kind of banking service Africa will need in the next few decades, and New York, London, and SV are not planning on an app that can do that.

So the consumer banking apps will come, but a banking infrastructure funnelling loans into real businesses and infrastructure - that's a different kind of banking. One it seems we forget.

28
Microsoft aren't forcing Lenovo to block free operating systems mjg59.dreamwidth.org
300 points by robin_reala  7 hours ago   194 comments top 28
1
Hydraulix989 4 hours ago 4 replies      
Their spin that it is "our super advanced Intel RAID chipset" really plays in their favor, given that their BIOS uses a single goto statement to intentionally block the ability to set this chipset into the AHCI-compatible mode that the hardware so readily supports - as evidenced by the reverse-engineering work, and by the fact that other OSes detect the drive once the AHCI fix is applied via the custom-flashed BIOS.

So, why are they reluctant to just issue their band-aid patch to the BIOS -- after all, it's really the path of least resistance here?

Yes, there has been some deflection of blame here. The argument that every single OS except Windows 10 is at fault for not supporting this CRAZY new super advanced hardware doesn't make much sense.

"Linux (and all other operating systems) don't support X on Z because of Y" doesn't really apply when "Z modified Y in a way that does not allow support for X."

To state it more plainly, this "CRAZY new super advanced hardware" has a trivial backwards compatible mode that works with everything just fine, but it is blocked by Lenovo's BIOS.

2
raesene9 5 hours ago 8 replies      
Also worth noting Lenovo's official statement on the matter, http://www.techrepublic.com/article/lenovo-denies-deliberate... confirming that they have not blocked the installation of alternate operating systems.

It was a shame to see the initial posts this morning hit the top of the page without any more evidence than a single customer support rep, who realistically was unlikely to have inside knowledge of some kind of "secret conspiracy" by Microsoft to block Linux installs.

3
AdmiralAsshat 5 hours ago 2 replies      
The moral of the story is that you shouldn't trust a low-level support engineer as a source for official company policy.
4
WhitneyLand 4 hours ago 0 replies      
There was way too much rush to judgement here. Suspicion and skepticism are great, let those fires burn. But let's not condemn or blame until the issue has been aired out from all parties.

- MS shouldn't be blamed based on what the CEO of Lenovo says, let alone what a tech or BB rep says.

- MS shouldn't be blamed for new crimes based on past behavior

Why care about MS or any other megacorp? Because this Salem-witch-trial shit is toxic and should not be condoned against anyone.

Rushing to suspicion and demanding answers is great. There is no downside to saving blame until after the facts are in.

5
pdkl95 4 hours ago 1 reply      
There has been a disturbing level of contempt for the people who were concerned about the future of Free Software. There has been a major shift towards more locked-down platforms for years, ever since iOS was accepted by the developer community. With Microsoft locking down Secure Boot on ARM and requiring it for Windows 10, it is prudent to be extra vigilant about anything strange that happens in the boot process. The alternative is to ignore potential problems until they grow into much larger problems that are harder to deal with.

Obviously vigilance implies some amount of false positives. It is easy to dismiss a problem once better information is available. It's great that this Lenovo situation is simply a misunderstanding about drivers, but that doesn't invalidate the initial concern about a suspicious situation.

6
farcical_tinpot 2 hours ago 1 reply      
Seeing a manufacturer use fake RAID by default on a single-disk system, then unfathomably hardwire it into the firmware so it can't be changed, then have a Lenovo rep actually admit the reason (with the forum thread censored), and then see this kind of defence is downright hilarious.

Garrett should be condemning Lenovo for not making a perfectly configurable chipset feature... configurable, and defending Linux and freedom of choice on hardware that has always traditionally been that way. But no, he doesn't. He defends stupidity, as he always does.

7
hermitdev 6 hours ago 1 reply      
For what it's worth, I've had issues with Intel RST under Windows as well in mixed-mode configs. My boot device is an SSD configured for AHCI, and I have a 3-drive RAID array. On a soft reset of my PC, the BIOS won't see the SSD. The completely non-obvious solution? Make the SSD hot-swappable. Not a Lenovo PC, either. This went on for years; I had to do a hard reset every time I restarted before I found a solution.
8
facorreia 5 hours ago 1 reply      
> Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.
9
NikolaeVarius 5 hours ago 0 replies      
Standard culture of outrage before actually taking more than 5 seconds to think about something and consider other possibilities.
10
rbanffy 4 hours ago 1 reply      
Wasn't Lenovo the company that shipped unremovable malware with its laptops? Considering the almost-impossible-to-disable Intel management stuff is also there, I can only imagine the kinds of parasites living on these machines.

Why would anyone buy their stuff?

11
bsder 45 minutes ago 0 replies      
The setting is almost certainly because of Microsoft. It is almost certainly part of their license agreement to block installation of anything older than Windows 10.

The fact that Linux got caught in it is just collateral damage.

12
rburhum 4 hours ago 2 replies      
What is crazy to me is that Lenovo is usually the brand people recommend for Linux laptops. They are shooting themselves in the foot here. They may think the number of people on Linux is too small, but I bet it is bigger than they think. It is just that there is no easy way to accurately count the number of Linux users on their hardware.
13
guelo 4 hours ago 1 reply      
> Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot

The modder that flashed the custom BIOS was able to boot linux on his first try.

14
seba_dos1 1 hour ago 0 replies      
Pushing Intel to provide the drivers or at least documentation would be the best solution - the BIOS lock would become irrelevant.

However, I don't agree with the conclusion that Lenovo isn't to blame. They went out of their way to ensure that even power users playing with the EFI shell won't be able to switch to AHCI mode.

I don't care about Microsoft here. Lenovo has shown its bad side, and I probably won't be buying their devices anymore - which is a pity, as I'm writing this on my Yoga 2 Pro, with my company's Yoga 900 (fortunately an older, unblocked revision) nearby, and I liked those devices.

15
guelo 4 hours ago 2 replies      
Without any comment from Lenovo or Microsoft this guy is speculating the same as everybody else.
16
StreamBright 4 hours ago 0 replies      
Somebody should notify the guys who went really deep condemning Microsoft for cutting shady deals.

https://news.ycombinator.com/item?id=12545878

17
youdontknowtho 3 hours ago 0 replies      
It's amazing that Linux has so thoroughly won in the device world, and yet MS is still every fanboy's favorite boogeyman. This is such a non-event.
18
rukuu001 4 hours ago 0 replies      
I'm surprised at the incredulity expressed here, given MS's history of dealing with OEMs. See https://en.m.wikipedia.org/wiki/Bundling_of_Microsoft_Window...
19
savagej 4 hours ago 1 reply      
Why would anyone ever buy Lenovo? It's malware, spyware, and harmful to users. I buy HP or Samsung laptops to run Fedora. Just accept that Lenovo is not IBM hardware, and that it is lost to us.
20
huhtenberg 5 hours ago 3 replies      
Yeah, sure, Microsoft is now all white and fluffy. Best friends forever.

How about we pay some attention to the second part of:

> Lenovo's firmware defaults to "RAID" mode and **doesn't allow you to change that**

Power savings or not, locking down the storage controller to a mode that just happens to be supported by exactly one OS has NO obvious rational explanation. Either Lenovo does that or Microsoft does. This has nothing to do with Intel.

21
youdontknowtho 3 hours ago 0 replies      
Of course they aren't, but how can I feel morally superior with that fact?
22
hetfeld 4 hours ago 1 reply      
So why can't I install Ubuntu on my Lenovo laptop?
23
colemickens 3 hours ago 1 reply      
Oh, it's funny to see the comments in this thread talking down about people on Reddit when the misplaced outrage was just as loud here. In fact, I got buried here for pointing out that the claim was BS and unrelated to Secure Boot, whereas Reddit at least took it thoughtfully and realized it was probably just a bullshit statement from a nobody rep that got blown out of proportion.

Sorry to be that guy, but the elitism is pretty misplaced these days...

24
lspears 1 hour ago 0 replies      
isn't
25
simbalion 5 hours ago 4 replies      
26
throw2016 2 hours ago 0 replies      
Some commentators seem more keen on labelling others conspiracy theorists than on considering the possibility that MS and Lenovo could be up to no good.

The only way to convince these folks, it seems, would be a smoking gun, or better yet a signed confession from Satya and Lenovo admitting to shady behavior.

Since that's not how shady behavior works in the real world, presumably many here are supporters of the head-in-the-sand approach, with a zero-tolerance policy towards non-conforming ostriches.

27
intopieces 4 hours ago 1 reply      
FTA:

"For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is miniscule."

This is a really poor argument, and slightly disingenuous. Sometimes people change what they use a device for. Maybe they want to explore linux in the future; maybe they want to sell the laptop to someone who wants to use it for linux...

That the blame is possibly being misdirected ought not to detract from the fact that blame is necessary. If users don't vocally oppose measures like this, the industry will assume that this kind of restriction is reasonable. It's not. Yes, power management is important, but anyone who puts linux on their laptop quickly learns that there are limitations to the features of a device originally tailored to the OS it shipped with. That's a good lesson, and a good opportunity for a community to develop around the device (if it's good enough) to mitigate those deficiencies and adapt them for the particular linux distro.

In short, Lenovo is at fault for not being up front about this limitation, for not explaining it, and for not devoting at least some resources to mitigating it for their potential linux-inclined users.

Then again, perhaps a linux-inclined user might also be one of the many that don't trust Lenovo after their self-signed certificate scandal.

28
farcical_tinpot 3 hours ago 1 reply      
I wondered when Matthew Garrett would pipe up and deepthroat Microsoft again. He duly obliges.

When I first heard this I thought this was what I always thought would eventually happen with Secure Boot, but we haven't quite arrived at that point yet. Give it time when we eventually end up with critical mass.

What seems to be happening is similar to Kaby Lake in reverse. You restrict the hardware and drivers needed to not only exclude other operating systems (which is a side-effect) but to build in hardware obsolescence, so you'll find yourself without any drivers or means to upgrade in future Windows versions.

You don't lock any chipset into an idiotic RAID mode in a laptop with a single disk. To claim this is done for power management reasons over perfectly standard AHCI is so stupid it isn't even funny, but Matthew likes defending stupid. He swears blind that Secure Boot will always be in a position where it can be turned off and Microsoft will not try anything on with key access. Because, you know, Microsoft's word and all that.

Be in no doubt, Microsoft and OEMs together want us all to throw away hardware and upgrade more regularly.

29
How Dropbox securely stores your passwords dropbox.com
208 points by samber  10 hours ago   155 comments top 33
1
Someone1234 10 hours ago 5 replies      
I cannot see any obvious weaknesses in this scheme.

It seems to address a known pain point in bcrypt (max length), implements a pepper in a secure way (which cannot inadvertently degrade security), and is otherwise doing things which are best practices (high work factor, per user salt, etc).

I know peppers remain controversial (some people claim they're pointless, and make a good argument). But ultimately nothing Dropbox is doing with peppers in this article makes your password easier to break, only harder.

I'd call this scheme 10/10.

2
borplk 6 hours ago 4 replies      
As someone who exclusively uses a password manager with random unique passwords for each service it always amuses me to see posts like this.

Years ago I relieved myself from the stress by using a password manager. Now for all I care they could be storing it in plaintext and it wouldn't make a damn difference to me. Problem solved.

3
0x0 10 hours ago 1 reply      
It's nice to store passwords securely, but it's also important to remember to, you know, actually verify them afterwards ;)

https://techcrunch.com/2011/06/20/dropbox-security-bug-made-...

4
cperciva 7 hours ago 1 reply      
> We considered using scrypt, but we had more experience using bcrypt.

Ok, fair enough...

> The debate over which algorithm is better is still open, and most security experts agree that scrypt and bcrypt provide similar protections.

... wait, what?

5
red_admiral 9 hours ago 0 replies      
Here's how Facebook does it: http://chunk.io/f/72f9c680ac2a4777b6dbf33c532e1d3c.jpg (Alec Muffett talking at Real World Crypto)

Seems like the combination of a strong hash + encryption on an HSM is the way to go these days. Dropbox's scheme looks good to me.

6
cstrat 2 hours ago 1 reply      
I am wondering how they store OS X users' administrator passwords, since those aren't hashed - they actually store the password somewhere... It would be nice if that were addressed somewhere.

See discussion: https://news.ycombinator.com/item?id=12457067

7
faragon 3 hours ago 1 reply      
From the diagram, Dropbox stores no passwords: it stores an encrypted hash of the password (hashing in two steps, SHA512 and then bcrypt). I.e. stored = AES256(bcrypt(SHA512(password), per_user_salt, 10), global_key).

I would like to know if "salted bcrypt" + SHA512 hashing is really safer than using just SHA512 (e.g. whether chaining them risks making hash collisions easier to locate, etc.).
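The pipeline described in that one-liner can be sketched as running code. This is a rough illustration only, not Dropbox's implementation: the stdlib's `pbkdf2_hmac` stands in for bcrypt (which is not in the standard library), and the outer AES-256 encryption under the global pepper is omitted.

```python
import hashlib
import os

def hash_password(password: bytes, salt: bytes) -> bytes:
    """Sketch of the two-step hash described above.

    pbkdf2_hmac stands in for bcrypt; Dropbox additionally
    encrypts this result under a global key (the pepper).
    """
    # Step 1: SHA512 maps any password to a fixed-size value,
    # sidestepping bcrypt's 72-byte input limit. Hex encoding
    # avoids NUL bytes (the article mentions base64 in passing).
    prehash = hashlib.sha512(password).hexdigest().encode()
    # Step 2: a slow, per-user-salted KDF (bcrypt at cost 10 in the post).
    return hashlib.pbkdf2_hmac("sha256", prehash, salt, 100_000)

salt = os.urandom(16)
stored = hash_password(b"correct horse battery staple", salt)
assert stored == hash_password(b"correct horse battery staple", salt)
assert stored != hash_password(b"wrong password", salt)
```

In the real scheme, the value returned here would then be AES-256-encrypted with the global pepper before being written to the database.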

8
aomix 9 hours ago 1 reply      
Cool approach: you need to compromise two separate servers just to have a usable password database you could run tools against. A key compromise can be fixed quickly, and a password database compromise is useless without the key.
9
sandGorgon 10 hours ago 1 reply      
Does anyone know what is a good practice to create a "vault" - the kind that is used for the Pepper in this case?

I have heard of it being a separate, IP-restricted server with a daily-changing IP address, etc. A simpler use case would be storing OAuth2 tokens or some kind of PII.

10
evunveot 8 hours ago 3 replies      
> Some implementations of bcrypt truncate the input to 72 bytes, which reduces the entropy of the passwords.... By applying [SHA512], we can quickly convert really long passwords into a fixed length 512 bit value, solving [that problem].

This part confused me. How can truncating to 72 bytes be a more severe reduction in entropy than generating a 64-byte hash?

11
OskarS 10 hours ago 2 replies      
> If we use the global pepper for hashing, we can't easily rotate it.

I don't get this point. Why is it harder to rotate pepper for a hash compared to an encryption key?
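The difference is about what the operator can do offline. If the stored value is ENC(pepper, bcrypt_hash), rotating the pepper means decrypting and re-encrypting every row - no passwords needed. If the pepper were mixed into the hash input instead, recomputing under a new pepper would require each user's plaintext password, i.e. waiting for every user to log in again. A toy sketch (the XOR "cipher" here is for illustration ONLY; a real system would use an authenticated cipher such as AES-GCM):

```python
import hashlib
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher, for illustrating key rotation mechanics only.
    # Never use this in practice; use AES-GCM or similar.
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

old_pepper, new_pepper = os.urandom(32), os.urandom(32)
bcrypt_hash = os.urandom(24)  # stands in for a user's bcrypt output

# Stored value is ENC(pepper, bcrypt_hash). To rotate the pepper,
# decrypt with the old key and re-encrypt with the new one;
# no password (and no user interaction) is required.
stored = toy_encrypt(old_pepper, bcrypt_hash)
rotated = toy_encrypt(new_pepper, toy_decrypt(old_pepper, stored))
assert toy_decrypt(new_pepper, rotated) == bcrypt_hash

# Had the pepper been hashed in - e.g. bcrypt(sha512(password + pepper)) -
# there would be no way to recompute the stored value under a new pepper
# without the original password.
```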

12
ppierald 9 hours ago 2 replies      
I would be interested in the details of the storage mechanism for the global pepper. Is it in an HSM? For AWS customers, something like KMS? There are then huge operational and redundancy issues to think about: failovers for your HSM, handling the possibility that AWS might be unavailable or might corrupt the key, and other cases. These things are easy to whiteboard, but when the rubber hits the road and you need to think through all the operational edge cases, things get hard quick.
13
martinko 10 hours ago 4 replies      
A bit of an overkill, no? Doesn't bcrypt suffice?
14
nodesocket 3 hours ago 0 replies      
So just to reiterate, taking the sha256 of the password before running bcrypt on it is recommended? Funny, this is the first I've heard of this. You'd think bcrypt would have just implemented the sha256 step into the algorithm?
15
CiPHPerCoder 9 hours ago 1 reply      
Their solution is very similar to the mode prescribed by [1] and implemented in [2].

There are actually two problems with bcrypt:

- It truncates after 72 characters
- It truncates after a NUL byte
If anyone is dead set on following Dropbox's example, make sure you aren't passing raw binary to bcrypt. You're playing with fire.

Additionally, if you're going to use AES-256, don't implement it yourself. Use a well-tested library that uses either AEAD or an Encrypt-then-MAC construction.

[1]: https://paragonie.com/blog/2016/02/how-safely-store-password...

[2]: https://github.com/paragonie/password_lock

16
figers 10 hours ago 0 replies      
How dropbox "NOW" securely stores your passwords
17
oDot 10 hours ago 3 replies      
While this is very impressive, it feels like trying to solve the wrong problem. The real problem is getting rid of passwords (Persona, anyone?).

Don't get me wrong, what's described there is super-important to secure the authentication of today, but what about a word for the authentication of tomorrow?

There already are various solutions. Passwordless[0] is a familiar one for nodejs, and I recently bumped into the promising Portier[1], which is, according to its authors, a "spiritual successor to Mozilla Persona".

[0] https://passwordless.net/

[1] https://portier.github.io/

18
kijin 10 hours ago 3 replies      
A note about combining SHA512 with bcrypt: Don't feed the raw binary output of SHA512 into bcrypt. Use the hexadecimal or base64-encoded form instead. (Dropbox probably does this already, since they mention base64 in passing.)

bcrypt is known to choke on null bytes. Each SHA512 hash has roughly a 22% chance (1 - (255/256)^64) of containing a null byte if you use the raw binary format.

Using hex or base64, of course, decreases the amount of entropy that you can fit into bcrypt's 72-byte limit. But you can still fit 288 to 432 bits of entropy in that space, which is more than enough for the foreseeable future.
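A quick stdlib check of the point about encodings (illustrative only):

```python
import base64
import hashlib

# Raw SHA512 output is arbitrary binary and may contain NUL bytes,
# which some bcrypt implementations treat as end-of-string.
raw = hashlib.sha512(b"example password").digest()
assert len(raw) == 64

# Base64 uses a NUL-free alphabet, so bcrypt sees the whole value.
encoded = base64.b64encode(raw)
assert b"\x00" not in encoded
assert len(encoded) == 88  # bcrypt keeps 72 chars of this: 72 * 6 = 432 bits
```

Hex encoding gives 128 characters, of which bcrypt keeps 72 (72 * 4 = 288 bits) - hence the 288-to-432-bit range above.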

19
Jahava 9 hours ago 0 replies      
The blog mentions, "We're considering argon2 for our next upgrade". I suppose they could do in-line upgrades: as users sign in, the SHA512 is piped through the old pipeline for verification and through the new pipeline for migration. As far as I can tell, there's no way for them to swap bcrypt out for argon2 using just their cold store.
20
Freaky 8 hours ago 1 reply      
> Some implementations of bcrypt truncate the input to 72 bytes, which reduces the entropy of the passwords. Other implementations dont truncate the input and are therefore vulnerable to DoS attacks because they allow the input of arbitrarily long passwords.

Huh? BCrypt works by stuffing the password into a 72-byte Blowfish key and using it to recursively encrypt a 24-byte payload. Either it's truncating, or it's pre-hashing the password to fit, much like they are.

The link they use to justify it is funny: http://arstechnica.com/security/2013/09/long-passwords-are-g...

That's just a naive PBKDF2 implementation that pointlessly reinitializes the HMAC context on each iteration instead of doing it once at the start. The difference between storing a 1-byte and a 1MB password with PBKDF2 should be on the order of a couple of milliseconds.
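The reason a sane implementation pays almost nothing for long passwords: HMAC itself (RFC 2104) replaces any key longer than the hash's block size with its digest before use, so PBKDF2 only hashes the long password once up front, and every iteration then works on a fixed-size key. This can be checked directly with the stdlib:

```python
import hashlib
import hmac

long_password = b"x" * 1_000_000  # a 1 MB "password"

# HMAC pre-hashes any key longer than the block size (64 bytes for SHA-256),
# so these two MACs are byte-for-byte identical:
mac_long = hmac.new(long_password, b"msg", hashlib.sha256).digest()
mac_pre = hmac.new(hashlib.sha256(long_password).digest(),
                   b"msg", hashlib.sha256).digest()
assert mac_long == mac_pre

# A PBKDF2 loop therefore pays the cost of hashing the long password
# only once; each of the thousands of iterations reuses the
# fixed-size derived key.
```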

21
Dowwie 4 hours ago 0 replies      
I wonder why Dropbox didn't mention its robust support for 2nd factor authentication?
22
coherentpony 10 hours ago 4 replies      
May I ask a potentially dumb question? Why store my password at all?
23
allstate 10 hours ago 0 replies      
Really surprised to see they are not using a HSM yet for the global pepper. What kind of physical controls put in place for global pepper currently?
24
aRationalMoose 9 hours ago 0 replies      
Now if only their customer service had been 'quietly' improved over the years.
25
eddd 5 hours ago 0 replies      
Too little, too late. Once you lose trust, you never regain it.
26
tadelle 5 hours ago 0 replies      
What is wrong with bcrypt at cost 13? That's 2^13 = 8192 rounds on the CPU...
27
joepie91_ 8 hours ago 0 replies      
One concern I have here, is that people are going to perceive this post as "this is what you should do and it's easy!", because the post doesn't really address the complexities of implementing this kind of thing.

As a result, we're probably going to have a bunch more issues like this one: http://blog.ircmaxell.com/2015/03/security-issue-combining-b...

I'm not looking forward to having to talk people off that particular ledge for the next several months...

28
mtgx 5 hours ago 0 replies      
So when is Dropbox going to allow users to encrypt files client-side before getting synced to its servers? It should be relatively trivial from both a technical point of view and a UX one.
29
artursapek 9 hours ago 2 replies      
30
awt 10 hours ago 1 reply      
This "promise" model of security needs to end. I "promise" I'll encrypt your password, honest.
31
ashitlerferad 9 hours ago 0 replies      
Why won't people stop using passwords? It is 2016!
32
davedx 10 hours ago 0 replies      
Is this before, or after, Condoleezza Rice uploads them to the NSA?
33
cypherpunks01 10 hours ago 0 replies      
Isn't this the company that authenticated all production users without checking passwords for a few hours, a couple years back?
30
Show HN: Weebly 4 Websites, eCommerce and Email Marketing weebly.com
155 points by drusenko  8 hours ago   69 comments top 18
1
drusenko 8 hours ago 8 replies      
Hey everyone, David (founder/CEO) of Weebly here. We're really excited about this launch: Weebly is now a platform to power a business online, with websites, ecommerce, and email marketing all under one roof. In case you haven't heard of us, we were W07 (one of the first YC batches). 9 years later, over 40M people have created their site or store on Weebly, and 325M people around the world (half of the US population among them) visit those businesses every month.

Here are some cool things you might not notice:

- We were born pre-AWS and actually run our own data centers, have an ASN and manage our own network, host 2PB of data that is geographically replicated in near-real time, and have successfully defended against 200Gbps+ DDoS attacks.

- We've put a ton of care into bringing all of the pieces together (websites, ecommerce, email marketing) in a super integrated and seamless way. Check out, for example, how you can customize all of the store emails with Weebly Promote (email marketing), how when you send out an email campaign you can automatically track sales generated from that email, how we automatically import and create smart groups -- like frequent customers who haven't purchased recently -- or how we will even recommend pre-created emails based on actions you take: adding new products, putting products on sale, etc.

- The eCommerce platform has been significantly upgraded, with things like real-time shipping (UPS, FedEx, USPS, DHL integrations), abandoned carts, gift cards, a rebuilt tax & shipping engine, a new storefront & checkout, bulk editing and power-seller features, and a whole bunch of other cool stuff.

- Check out the apps for iOS and Android. It was pretty hard engineering work to get a full live editing experience with a fast, native UI that needs to ultimately render down to a slow WebView (no one else that we're aware of has been able to pull this off like this).

- We've built a web code editor (similar to Mozilla Thimble from a few days ago) that's pretty nifty. Create a site, then go to Theme > Edit HTML / CSS (screenshot: https://www.dropbox.com/s/ry5aeykn1l56l17/Screenshot%202016-...)

- Here are some of the cool new themes: https://highpeak-theme.weebly.com/, https://verticals-business-slick.weebly.com/, https://pathway-financial.weebly.com/, https://urbandine-business.weebly.com/, https://jaysims-oasis-merch.weebly.com/, https://oikos-test.weebly.com/

Our ultimate goal is to create a platform that small to medium creators of all kinds can use so they can focus on what they love doing, and less on the business of running their business. Imagine all the time spent learning from awesome people like patio11 -- what if we could make the whole online side of running your business a whole lot easier? That's the dream, and this is the first step in that direction.

Happy to answer any questions and would love your feedback!

2
atourgates 7 hours ago 5 replies      
One thing I've noticed on Weebly, Squarespace, Wix and other similar "DIY without any coding" website platforms is that all their themes depend on having excellent photography.

Excellent photography is great, but many of the small businesses who are primarily targeted with these services, don't have it. And stock photography looks like, well, stock photography.

I'd love to see a service like this embrace themes that don't depend on great photography. Themes that make good use of typography and other non-image visual elements.

3
SwellJoe 6 hours ago 1 reply      
Weebly is one of the companies from our batch (W07) that I would have wanted to invest in, if given the opportunity. They just build really cool products for non-technical folks. I've been recommending them for years (even though they kinda/sorta indirectly compete with what we build).

I've been surprised by how important and effective email still is to most small businesses, and it's a hard problem to solve; having it integrated with the rest of your site and commerce solution is even harder. Ecommerce is a more obvious need, but has a lot more solutions available, including for non-technical folks.

Congrats on launching cool new stuff all these years later!

4
uses 5 hours ago 1 reply      
This page slows my browser to a crawl. I had to close the page to type this comment. Same if I open it in incognito with no extensions.

Chrome Version 53.0.2785.116 m, Windows 10

5
inputcoffee 7 hours ago 1 reply      
Obviously, this isn't competing with someone who is spinning up their own Django/Rails/Node solution.

The big questions that come to mind:

1. How does this compare with hosting Wordpress. I like that with Wordpress, if some issue comes up, you can find someone who knows the innards and program what you need. Does the user have that level of access?

2. How does it compare with WIX, and the other site building competitors?

3. What if someone has built some great piece of code and I want to install some of the functionality on my site, can I do it?

It just seems that it is a closed enough system that I have to rely on Weebly engineers to do everything.

6
Angostura 3 hours ago 1 reply      
The move from Weebly 2 to Weebly 3 seemed to be very rough - lots of unhappy punters. What lessons did you learn from that move, and how did you handle things differently this time?
7
tucaz 7 hours ago 0 replies      
Accessed from Brazil and the content is translated. Very cool. Congrats!
8
kfk 7 hours ago 4 replies      
Good, but I live in Germany and I don't speak German, so how do I read the content?
9
slater 7 hours ago 1 reply      
I see the HTML that weebly produces now isn't quite as horrendous as it used to be! :D
10
triangleman 7 hours ago 0 replies      
Glad to hear y'all are keeping up with the industry. I will have to give this another look.
11
BadassFractal 3 hours ago 0 replies      
How does this Weebly compare to squarespace, wix and webflow? What's the differentiation?
12
omarchowdhury 7 hours ago 1 reply      
This looks fantastic. We are moving our store from an older CRM and looking for an alternative. We were set on bigCommerce...

Can we bring our own HTML/CSS and integrate with Weebly? Even with custom payment flows?

13
rubidium 7 hours ago 2 replies      
So let's say I'm going to set up a small business with an online store and other content (blog, etc.). Let's also say I don't care about the tech stack and don't want to be doing custom CSS/HTML.

Weebly looks promising for that, and at $25-50/month isn't too bad.

For people more familiar with this area, what other options are there? Wordpress+shopify? And how do the fees stack up with the different options?

14
zazaalaza 5 hours ago 0 replies      
Haha, you forgot to update this section of your site: https://education.weebly.com/
15
donutdan4114 6 hours ago 1 reply      
Do app integrations have a way of storing arbitrary data on resources, such as products?

For example, Shopify has "metafields" which allow Apps to add strings of data within a namespace/key, which can then be used in the liquid templating engine.

16
jondishotsky 2 hours ago 0 replies      
Excellent team, beautiful product, bravo!
17
S4M 5 hours ago 0 replies      
Is this Viaweb 20 years later?
18
harryf 6 hours ago 2 replies      
iPhone 6 Plus in landscape ... http://i.imgur.com/ZKorFe9.jpg
       cached 22 September 2016 01:02:01 GMT