hacker news with inline top comments (20 Jul 2017)
Ways a VC says no without saying no unsupervisedmethods.com
53 points by RobbieStats  1 hour ago   5 comments top 4
AndrewKemendo 16 minutes ago 0 replies      
These are all still pretty fast No's in my experience.

The worst No's are the ones where they ask to do Due Diligence and then never open the dropbox folder. Or if you are on your fifth meeting and they just keep trying to pump you for competitive information.

So how do you know when you get a Yes?

When you get a wire or a check. That's the only way.

Even a signed note or Equity docs don't mean anything until that money clears.

The best No's I've had were from Bessemer and a16z years ago. Almost immediate and right to the point that they wouldn't invest with specific reasoning/metrics behind them. A++ would get told no again.

lpolovets 12 minutes ago 0 replies      
Most of these basically bucket into "I'm not interested" or "I'm not interested at this time, but I think that might change in the mid-term future."

Why are there so many ways to say No? Because just saying "no" is rude -- although some of the 15 alternatives in the Medium post are even worse because they waste a founder's time. It's like if a recruiter reaches out to you: most people don't reply, or they reply with something like "sorry, this is not a great fit" or "I'm not looking at this time."

FWIW, there are many VCs (though probably not the majority) that give concrete reasons when saying No. When I got into venture capital 5 years ago, many peers told me to be vague in order to maintain option value in a company's future fundraises. That sounded dumb to me because if I were a founder I would want feedback, so I try to give useful feedback when I'm passing. That's worked out well over time, and founders whose companies I passed on often introduce me to other founders, or reach out when they're fundraising again.

lisper 15 minutes ago 0 replies      
He left out an important one: sometimes VCs say no by saying yes. It goes like this:

VC response: We're really interested and we want to do the deal, we just need to wait to hear from partner X who is currently out of town.

Translation: We are about to fund one of your competitors, and we want to string you along as far as possible in the hopes that we can distract you from other fundraising efforts so that you will be less of a threat to our baby.

Comment: It's not a "yes" until the check clears. (And even then you should probably wait two weeks just to be sure.)

EGreg 7 minutes ago 1 reply      
Seriously, with all the new crowdfunding and ICO options, why fight to convince the gatekeepers like VCs when you can first try to convince some percentage of the population to each put in a small amount?
ShareLaTeX Joins Overleaf sharelatex.com
113 points by nipun_batra  3 hours ago   31 comments top 9
cyphar 1 hour ago 2 replies      
> Yes. Both Overleaf and ShareLaTeX are committed to ensuring that all of the open ShareLaTeX code base will remain open source and will continue to be actively developed.

That's a fairly disingenuous answer to the question. The code is AGPLv3+ licensed and they are not the sole copyright holder (it is true that they have a CLA[1], but from a quick reading the CLA says that they "agree to also license the Contribution under the terms of the license or licenses which We are using for the Material on the Submission Date").

What people want to know is whether ShareLaTeX is going to just become a tiny free software part of a larger proprietary platform. It appears to me that this is likely going to be the case, which is a real shame since I've always respected that the entirety of ShareLaTeX was AGPLv3+.

I hope ShareLaTeX doesn't become another victim of "Our Incredible Journey"[2].

[1]: https://sharelatex.wufoo.com/forms/sharelatex-contributor-li...

[2]: https://ourincrediblejourney.tumblr.com/

pfooti 1 hour ago 2 replies      
I'm somewhat bummed about this - I am a big fan of ShareLaTeX, and have been using it for quite some time. I absolutely love that the whole thing is built on an open-source engine (not just the latex part - you can self-host if you want). Overleaf has a lot of also-interesting features, and probably a more robust revenue stream, but it's always a bit of a bummer when the open-source player in the market gets bought out by the closed-source one.

Hopefully that last bit in the announcement remains true: "Both Overleaf and ShareLaTeX are committed to ensuring that all of the open ShareLaTeX code base will remain open source and will continue to be actively developed."

jpallen 59 minutes ago 0 replies      
Hey, James from ShareLaTeX here. We're very excited about what this means for ShareLaTeX and Overleaf! The blog post says most of what we wanted to say, but all four founders from ShareLaTeX and Overleaf are around this evening (we're in the UK) to answer questions if you have any. Give us a little while to reply though, since we're all trying to have dinner too! :)
andreyk 1 hour ago 0 replies      
I've used both Overleaf and Sharelatex quite a bit, and think both products are great and have different strengths. It was frustrating to have to choose and have my Latex files split between the two, and this niche does not feel big enough to merit two competing great products, so I was pretty happy to hear about this.
mettamage 1 hour ago 0 replies      
ShareLaTeX changed my life in the world of LaTeX editors. With a normal LaTeX setup I installed 10 GB of stuff and had no collaboration tools, no spellchecker, and no good folder structure.

It's those small GUI aspects that really made me appreciate ShareLaTeX.

itsmenow 1 hour ago 0 replies      
I personally much prefer using a local setup (editor+plugins+instantaneous compiling, etc.), but of course collaboration is painful that way, especially given that my collaborators much prefer web-based/shared workflows. Just a few days ago, however, I discovered that I can use an Overleaf project as just a git repo, then push/pull as I see fit. That is an amazing feature! Everyone gets to work how they want. I hope it stays included with the free version.
jsvcycling 2 hours ago 1 reply      
I've never used Overleaf, but for several years I used ShareLaTeX as my primary LaTeX editor. I've since switched to using LaTeX through Emacs, but I still regularly use ShareLaTeX's great documentation, and if I didn't carry my Linux laptop around everywhere, I'd probably still be using ShareLaTeX. Hopefully this new partnership won't ruin it.
mk321 1 hour ago 9 replies      
Why is it better than an offline editor (like TeXnicCenter) plus a code repository (like Git)?
kronos29296 1 hour ago 0 replies      
Okay so instead of two competing products we now have one.
Scientists Reverse Brain Damage in Drowned Toddler? newsweek.com
443 points by Deinos  6 hours ago   204 comments top 22
dumbneurologist 3 hours ago 13 replies      
Disclaimer: I am a neurologist

The enthusiastic replies on this thread are understandable, but disappointing to see: we all need to be less credulous regarding the lay science press, and especially the lay medical press.

I would love nothing more than to have this kind of therapy be a reality for my patients. However, I am deeply skeptical of this report.

Why? Because

- hyperbaric oxygen therapy has a big industry of quackery behind it[1][2]

- oxygen is a standard part of medical care and can just as easily be harmful as helpful

- there is just no way in hell that oxygen is going to reverse cell death

- this is in Newsweek, not a peer-reviewed journal.

And if there was no cell death, then the recovery is almost inevitable.

Some posters are skeptical because 15 minutes is impossible.

On the contrary: the key point is the temperature. The article says the water was 4 degrees C. That is cold enough that you can recover fully. In fact, the most amazing recovery is also one of the best-documented: a 66-minute submersion in Utah that was followed by complete recovery[3] (this is a far more interesting article than the original post - it was in 1988, and utilized extracorporeal rewarming). This observation was used to pursue hypothermia in other causes of anoxic injury, which is clinically used today. I'm sure the 66-minute case also got oxygen during the recovery, but to say that the outcome was due to oxygen (which is standard of care) rather than the temperature is silly.

Sorry to be a wet blanket, but this article is just clickbait junk.

1. https://www.fda.gov/ForConsumers/ConsumerUpdates/ucm364687.h...

2. https://www.quackwatch.org/01QuackeryRelatedTopics/HBOT/hm01...

3. http://www.nytimes.com/1988/07/26/science/the-doctor-s-world...

nerdponx 6 hours ago 7 replies      
This is approaching Star Trek levels of medicine. Congratulations to the team who pulled this off, and of course my heart goes out to the family and their child. Drowning is very serious and very scary.

Edit: somewhat unrelated since this girl fell into an unattended pool, but it's important to know the signs of drowning, which are not what you see in movies: http://www.cbsnews.com/news/how-to-spot-signs-of-a-child-dro...

Edit 2: I get that people have a right to downvote whatever they want, but seriously, did I say something wrong here?

davidiach 5 hours ago 1 reply      
>Concluding, the researchers say that to their knowledge, this is the first reported case of gray matter loss and white matter atrophy (types of brain damage) reversal with any therapy and that treatment with oxygen should be considered in similar cases. Such low-risk medical treatment may have a profound effect on recovery of function in similar patients who are neurologically devastated by drowning."

I always believed that brain damage cannot be reversed. If version 1 means reversing it in toddlers, maybe version 10 will do miracles for many other people. Truly amazing and congratulations to the medical team!

madilonts 5 hours ago 3 replies      
Well, this event happens enough that it might be worth studying the benefit of oxygen therapy, but I'd be very careful about the conclusions you draw from this.

Maybe the oxygen had a substantial positive effect, or maybe the child would've recovered on her own. We really don't know, since there are other reports of children who have good neurological outcome despite terrible prognosis [1] [2].

I'm suspicious because of the unusual and/or stereotyped responses in the Medical Gas article and the linked YouTube videos: "doctors said she had 48 hours to live" (doctors don't say things like that) and "this demonstrates that we're inducing 8101 genes!" (ummm, OK...), etc.

Also, be suspicious when something like this hits all the pseudo-news sites simultaneously. It reminds me of the articles that go something like "16 year-old cures cancer...DOCTORS HATE HIM!".

Finally, I'm very happy this little girl has been given a second chance and hope for her continued recovery. However, don't forget that a toddler was left unsupervised and submerged in a pool for 15 minutes. Some people call that an accident; some people call it neglect.

[1] https://www.ncbi.nlm.nih.gov/pubmed?term=3379747

[2] https://www.ncbi.nlm.nih.gov/pubmed?term=10665559

matt4077 5 hours ago 3 replies      
> was in the 5 degree Celsius water for up to 15 minutes before being discovered.

as my professor used to say: If you're going to drown, drown in almost-freezing freshwater.

amykhar 5 hours ago 3 replies      
What frustrates me is that in the United States, most insurance companies won't pay for hyperbaric oxygen treatment for traumatic brain injuries. My son, 26, was injured in a car accident last November. I would love to be able to get Oxygen therapy for him, but cannot.
slr555 5 hours ago 1 reply      
Drowning is from a medical standpoint more complex than the simple notion I grew up with which was in essence "water fills your lungs so you can't breathe air".

In fact, drowning does not require filling the lungs completely. Even a volume of a few milliliters per kilogram of body weight is enough to cause drowning. Additionally, drowning can cause serious damage to the lungs themselves even if the patient survives initial attempts at resuscitation. The alveoli (the functional units of the lungs) are lined with a surfactant that is critical to gas exchange with the bloodstream. Water can severely disrupt the surfactant and impair function, not just while the water is present but until the body is able to restore the surfactant layer. Damage to the patient's lungs in this case seems to have been mild enough that the oxygen therapy could do its job.

Also notable is the 5 degree Celsius water temperature (41 degrees Fahrenheit). This water temperature, compared with the temperature of an Olympic practice pool (~76 degrees Fahrenheit), is cool enough (though not as cold as in many other reports) to trigger the so-called "diving reflex", where stimulation of thermoreceptors in the skin triggers a vagal response that shunts blood away from the periphery and to vital organs.

Minimal surfactant damage and the diving reflex (as well as the patient's age) seem likely to some degree to have facilitated successful treatment of the patient.

mechnesium 5 hours ago 0 replies      
This is really awesome. I am curious if this therapy would have been augmented by cognitive enhancers or nootropic substances such as piracetam. Piracetam in particular exhibits neuroprotective effects and improves cerebral vascular function. Several studies have found it to improve recovery following acute/transient ischemic stroke. It has actually been prescribed in several countries for this purpose.


samfisher83 5 hours ago 0 replies      
It seems like they fed the body a lot of oxygen and the body healed itself. I think the body is pretty amazing at regeneration when we are young.
mabbo 5 hours ago 2 replies      
I was worried this would be a case of neural plasticity, where the brain just rewires itself around the damage (which is a thing, and it's super cool). But then I read this part:

> An MRI scan a month after the 40th HBOT session showed almost complete reversal of the brain damage initially recorded. Researchers believe the oxygen therapy, coupled with Eden having the developing brain of a child, had activated genes that promote cell survival and reduce inflammation, allowing the brain to recover.

We can reverse brain damage. Wow.

wvh 3 hours ago 0 replies      
[...] and two hours where her heart did not beat on its own.

Impressive. I wonder if there are ways to force this level of regenesis in adult brains with less generative power and neuroplasticity.

I don't think there's anything sweeter to a human being than "here's your child back".

sehugg 3 hours ago 0 replies      
Can anyone knowledgeable about medicine explain this article further? For example, I'm wondering why they waited 55 days to give normobaric oxygen therapy. Wouldn't it be given immediately for a patient with brain injury?
timcamber 5 hours ago 3 replies      
This is amazing. Does anyone think the cold temperature of the water (5C) had anything to do with the feasibility of recovery? I don't necessarily have a reason to think it would be beneficial or not, just a thought that crossed my mind. I don't think it was mentioned in the article.
sunwooz 5 hours ago 0 replies      
Is there data out there about infants in a similar situation who didn't receive oxygen therapy? Is it possible that the developing child brain is what almost solely caused the improvements?
blauditore 3 hours ago 1 reply      
First paragraph:

> she spent 15 minutes submerged in a swimming pool

This seems highly implausible, given she survived. Also, how would they know the moment she dropped in?

Further down:

> up to 15 minutes

Ah ok. From what I know, brain damage starts occurring even after 2-3 minutes without air (for adults), so I suppose it was rather on the lower end. Does anybody know a bit more about this?

rhinoceraptor 4 hours ago 0 replies      
It would be interesting to know if better results could be obtained using even more oxygen, in combination with a ketogenic diet/exogenous ketones (which would negate the risk of oxygen seizures).
msie 3 hours ago 0 replies      
This is cool but will other people try it or will it be another forgotten technique?
flamedoge 3 hours ago 0 replies      
Drowned U.S. Toddler. U.S. is unnecessary here.
zeveb 4 hours ago 0 replies      
Egad the JavaScript on that page is terrible! Every time I scroll to read the first paragraph, it hides the video or something, causing it to scroll away.
TurboHaskal 5 hours ago 2 replies      
How is nationality relevant?
ilitirit 6 hours ago 6 replies      
Does drowning not imply death? Is there different definition for drowning (or death) in medicine?

EDIT: I'm referring to the fact that the title says the girl drowned, not that she was at some point "drowning".

komali2 1 hour ago 2 replies      
Come on, whoever you are. You're seriously going to drop words like infarction, thalami, FLAIR hyper intensity, massive cortical infarction on us? What's the point? Maybe 1% of the people on this public forum will have even the slightest clue what you're talking about.

Luckily someone else has posted a dictionary underneath you, but I mean, surely you're used to speaking to laymen? You couldn't have just said "tissue death" instead of "infarction?"

Apollo Server 1.0A GraphQL Server for Node.js Frameworks apollodata.com
63 points by michaelsbradley  2 hours ago   9 comments top 2
throwaway2016a 1 hour ago 1 reply      
I never quite understood Apollo. I've written GraphQL servers using just the Node.js GraphQL implementation and they haven't been much code. Barely more boilerplate than the Apollo example given.

I'll have to dive into the code myself when I have some free time but can anyone explain to me what value this adds specifically?

I am assuming it does add value and I'm just not seeing it.

danr4 1 hour ago 3 replies      
Couldn't find whether Apollo solves the database round-trips issue with GraphQL (the joins/n+1 problem). Can anybody shed some light on it?
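For context, the usual fix for the n+1 problem (whether or not Apollo ships one built in) is request-scoped batching, the approach popularized by Facebook's DataLoader library. A minimal hand-rolled sketch of the idea follows; `fetchUsersByIds` and the other names here are illustrative stand-ins, not Apollo APIs:

```javascript
// Collect the ids requested during one tick of the event loop, then
// issue a single batched query instead of N separate ones.
function createBatchLoader(batchFn) {
  let queue = [];        // pending {id, resolve} entries
  let scheduled = false;
  return function load(id) {
    return new Promise((resolve) => {
      queue.push({ id, resolve });
      if (!scheduled) {
        scheduled = true;
        // Flush once the current tick's resolvers have all enqueued.
        process.nextTick(async () => {
          const batch = queue;
          queue = [];
          scheduled = false;
          const results = await batchFn(batch.map((e) => e.id));
          batch.forEach((e, i) => e.resolve(results[i]));
        });
      }
    });
  };
}

// Usage: three resolver calls, one database round trip.
let batchCalls = 0;
const fetchUsersByIds = async (ids) => {
  batchCalls += 1; // in real code: SELECT ... WHERE id IN (ids)
  return ids.map((id) => ({ id, name: `user${id}` }));
};
const loadUser = createBatchLoader(fetchUsersByIds);

Promise.all([loadUser(1), loadUser(2), loadUser(3)]).then((users) => {
  console.log(batchCalls); // 1, not 3
});
```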
TCP BBR congestion control comes to GCP googleblog.com
71 points by 0123456  2 hours ago   19 comments top 8
nealmueller 2 hours ago 1 reply      
Today's Internet is not moving data as well as it should. TCP sends data at lower bandwidth because the 1980s-era algorithm assumes that packet loss means network congestion.

BBR models the network to send as fast as the available bandwidth allows and is 2700x faster than previous TCPs on a 10Gb, 100ms link with 1% loss. BBR powers google.com, youtube.com, and apps using Google Cloud Platform services.

Unlike prior advancements such as QUIC, which required a special browser, BBR is a server-side-only improvement, meaning you may already be benefiting from BBR without knowing it; end users don't have to change anything. This is especially relevant in the developing world, where older mobile platforms and limited bandwidth are common.

atomt 1 hour ago 1 reply      
A warning if you want to try out BBR yourself:

Due to how BBR relies on pacing in the network stack make sure you do not combine BBR with any other qdisc ("packet scheduler") than FQ. You will get very bad performance, lots of retransmits and in general not very neighbourly behaviour if you use it with any of the other schedulers.

This requirement is going away in Linux 4.13, but until then blindly selecting BBR can be quite damaging.

Easiest way to ensure fq is used: set the net.core.default_qdisc sysctl parameter to "fq" using /etc/sysctl.d/ or /etc/sysctl.conf, then reboot. Check by running "tc qdisc show"

Source: bottom note of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
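The steps described above can be sketched as a config fragment (assuming a pre-4.13 kernel and the standard sysctl paths; run as root):

```shell
# Make fq the default qdisc and enable BBR, persistently:
cat > /etc/sysctl.d/99-tcp-bbr.conf <<'EOF'
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF
sysctl --system        # or reboot

# Verify:
tc qdisc show                              # should show "fq"
sysctl net.ipv4.tcp_congestion_control     # should print "bbr"
```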

orware 23 minutes ago 0 replies      
Over the weekend I set up one of the new consumer mesh products that's available, the Linksys Velop, with 9 nodes covering a good-sized area between two homes.

One thing I've been noticing though is that there is considerable latency/packet loss at the moment (there is only one wired backhaul at the primary node and all of the other nodes are connected to each other wirelessly).

I've been running Ping Plotter against all of the nodes and there seems to be considerable packet loss (a few percent) and spikes in latency (the average for the two closest nodes to my laptop is about 15 ms, the middle ones out a ways are about 30-40 ms, and the furthest ones are at about 60 ms), but the spikes can be in the hundreds or even thousands of ms.

The area covered is about a 500 ft by 120 ft rectangle more or less (with my house on the bottom left of that rectangle and the other home on the bottom right of that rectangle).

My question would be...would this BBR algorithm help in some way to reduce the latency/packet loss in a situation like this? Or does it only apply for these other situations that Google would normally be encountering/dealing with?

Thanks for the input!

signa11 1 hour ago 1 reply      
here is the acm-queue article/paper on the same thing:



some more sources of information

ietf drafts on the same topic available here:



and a blog post giving a detailed history of various congestion control mechanisms, and bbr as well:


eikenberry 1 hour ago 1 reply      
For those interested in a simple guide on how to try it on your servers, this is the most to the point I've found.


Veratyr 58 minutes ago 1 reply      
> When a GCP customer uses Google Cloud Load Balancing or Google Cloud CDN to serve and load balance traffic for their website, the content is sent to users' browsers using BBR. This means faster webpage downloads for users of your site.

This makes it sound like BBR is only available for Google-managed services on GCP, is that correct? Can I use BBR on GCE servers (which can install the kernel module)? Seems like an odd thing to leave out.

anonymousDan 23 minutes ago 0 replies      
Sounds like it would be great for wireless ad hoc networks.
QUFB 47 minutes ago 0 replies      
I'd much prefer to see GCP add IPv6 support, which is sorely lacking.
How an ex-FBI profiler helped put an innocent man behind bars latimes.com
24 points by ClintEhrlich  1 hour ago   4 comments top 3
danso 18 minutes ago 0 replies      
Worth noting that the submitter of this story, user ClintEhrlich, played a main role in freeing the wrongly-convicted man. He posted about it on HN a couple years back, in a highly upvoted and discussed thread (500+ upvotes, 200+ comments): https://news.ycombinator.com/item?id=12010760
gumby 4 minutes ago 0 replies      
The big crime is how little epistemological support there is for some of the big forensic tools (fingerprints, DNA, bite marks, arson spread, etc), how little interest there is in researching these areas, and how trusted they are.

But I am curious: apart from TV shows, how important is this kind of evidence in most trials? Is it actually uncommon?

neaden 20 minutes ago 1 reply      
Profiling, like much of "criminal science" such as Arson Investigation is a large part unsubstantiated bullshit. It's amazing how poorly regulated all of it is and how many innocent people are in prison, or executed, because of it.
Anomaly Detection of Time Series Data Using Machine Learning and Deep Learning xenonstack.com
22 points by myth_drannon  1 hour ago   6 comments top 3
VHRanger 35 minutes ago 2 replies      
> This is because the oscillations are dependable upon the business cycle.

No, no, no, no. The "business cycle" is a misnomer; it's not a cycle. It doesn't fit any descriptions of a "cycle" except for the fact that it sometimes goes up and sometimes goes down.

Saying that it's a cycle implies you can time downward trends like crashes or predict their intensity, which anyone in finance or macroeconomics will tell you is impossible.

baldeagle 35 minutes ago 1 reply      
TLDR: This is a high-level overview of all the terms in the topic (Anomaly, Time Series, Deep Learning). If you have a passing familiarity with the problem space, there is nothing new here. There is an ad for Xenonstack services at the bottom.
gargravarr 1 hour ago 0 replies      
The first few words of this title sound way too sci-fi. Very disappointed when I read the whole thing together.
Things to learn in React before using Redux robinwieruch.de
221 points by callumlocke  7 hours ago   71 comments top 9
k__ 5 hours ago 6 replies      
My first project with React was a mess, mostly because of Redux. Not because it's bad, but because the lead dev was adamant about using something Flux-like. First we had Flummox, then he rewrote everything in a week with Redux; that was 2 years ago, before Redux had sane async helpers.

In my current project (where I'm the lead, haha) I'm trying to go state/props all the way.

I think for most apps it's more than enough AND it's easier to get started for new devs.

React is really easy. I mean coming from Ember and ExtJS, its API is a walk in the park and you can get really far without extra state management.

One thing is important for this approach: minimize nesting.

You don't want to pass down everything from the app, to the screen, to the list, to the item. etc.

Instead of doing:

 <List items={items} />
do it like this:

 <List>
   {items.map(i => <Item item={i} />)}
 </List>
No nesting of screens (top-level components), no (hidden) nesting of components. This may seem kind of strange, because the first example has no dependency on the Item component, but it gives the component that uses the List direct access, which simplifies props-passing immensely.

This doesn't work for all apps, but it's a good starting point.

I ended up with 2 folders, screens and components, that are both simply flat lists of component files.

sghiassy 4 hours ago 2 replies      
The rush to use Redux for every React project is one of the most annoying parts of the React community; using a tool just to use it, before understanding if you need it or not. This article summarizes a lot of good points.
acemarke 3 hours ago 0 replies      
As usual, this is an excellent article by Robin. Well-written, and full of great information.

It's worth noting that both the React and Redux teams (including Dan Abramov and myself) agree that you should focus on learning React first, and _then_ learn Redux. That way there's fewer new concepts and terms to learn at once, and once you understand React, you'll have a better appreciation for what kinds of problems Redux can help solve. That's not to say you _can't_ learn them both at once, just that it's the suggested approach that will work best for most people.

noncoml 1 hour ago 0 replies      
IMHO Redux is a heavy, clunky and awkward practice that is only holding React back.

My advice to anyone reading this forum is give MobX a try before deciding to commit to Redux.

hippich 5 hours ago 0 replies      
As a general rule, do not use `this.state` inside `setState({ ... })` - this will eventually cause you problems due to state updates being async.

If you need to use state to set new state, use functional callback instead - https://facebook.github.io/react/docs/react-component.html#s...
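To illustrate why, here is a small stand-in for React's update batching (this is not React itself, just a mock that reproduces the queue-then-flush behavior); the object form loses an update, while the functional updater applies both:

```javascript
// Tiny mock of batched setState: updates are queued, then applied
// together. With the object form, both calls read the same stale
// this.state, so one increment is lost.
class FakeComponent {
  constructor() {
    this.state = { count: 0 };
    this._pending = [];
  }
  setState(update) {
    this._pending.push(update); // queued, applied later
  }
  flush() {
    for (const u of this._pending) {
      const patch = typeof u === 'function' ? u(this.state) : u;
      this.state = { ...this.state, ...patch };
    }
    this._pending = [];
  }
}

const bad = new FakeComponent();
bad.setState({ count: bad.state.count + 1 }); // reads stale 0
bad.setState({ count: bad.state.count + 1 }); // still reads 0
bad.flush();
console.log(bad.state.count); // 1 - one increment lost

const good = new FakeComponent();
good.setState((prev) => ({ count: prev.count + 1 }));
good.setState((prev) => ({ count: prev.count + 1 }));
good.flush();
console.log(good.state.count); // 2 - both applied
```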

tchaffee 6 hours ago 0 replies      
Worth a read. It summarizes in one place much of what I learned bit by bit from various other articles.
bernadus_edwin 2 hours ago 1 reply      
People should learn EventEmitter or PubSub before learning Redux.
captainmuon 5 hours ago 6 replies      
I wish React would come with a standard way to handle Ajax (or a convention or semi-standard library would emerge). (Edit: "comes along" in the sense that immutability-helpers, redux and create-react-app come along with react. I'm not proposing to add anything to the react module. I'm not the world's best expert on react, but before downvoting can you please assume I know a little bit of what I'm talking about?)

Something that:

- Can fetch JSON according to criteria from an API

- Caches stuff locally (just in memory, or in local storage) in case of duplicate calls

- Deals with multiple concurrent calls and merges them (e.g. fetching 1 and then 2, 3, 4 before 1 finishes -> either cancel 1, or wait until it finishes, and then fetch only the last requested item)

- And all the stuff I can't think about right now, like cancellation and timeouts

Plug your pure component into one of these, tell it about your API, and you're done. It's really error prone to write these containers yourself. And I think Redux doesn't really help much with Ajax.
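The wish list above can be sketched in a few lines; all names here are hypothetical (not an existing library), and the fetcher is injected so the same code works with `window.fetch` in a browser:

```javascript
// Sketch of an Ajax helper: in-memory caching of duplicate calls plus
// "latest request wins" merging for rapid successive requests.
function createApiClient(fetchJson) {
  const cache = new Map(); // url -> resolved data (dedupe)
  let latestSeq = 0;       // sequence counter for stale-drop
  return {
    async get(url) {
      if (cache.has(url)) return cache.get(url); // duplicate call
      const data = await fetchJson(url);
      cache.set(url, data);
      return data;
    },
    // "Fetch 1, then 2,3,4 before 1 finishes": only the last request's
    // result is delivered; superseded requests resolve to null.
    async getLatest(url) {
      const seq = ++latestSeq;
      const data = await fetchJson(url);
      return seq === latestSeq ? data : null; // stale -> dropped
    },
  };
}

// Usage with a fake backend:
let hits = 0;
const fakeFetch = async (url) => { hits += 1; return { url }; };
const api = createApiClient(fakeFetch);
api.get('/items/1')
  .then(() => api.get('/items/1')) // served from cache
  .then(() => console.log(hits));  // 1
```

Cancellation and timeouts (via AbortController, for example) are left out to keep the sketch short.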

TotallyHuman 5 hours ago 7 replies      
I don't understand why anyone would use React at all with the ridiculous license.
Daydream Labs: Teaching Skills in VR blog.google
25 points by janober  2 hours ago   6 comments top 3
hyperion2010 1 hour ago 0 replies      
This will be extremely useful for teaching how to use 'always on' equipment like big electron microscopes where the cost of training someone is enormous if you have to give up time where the equipment could be running.
andkon 1 hour ago 0 replies      
This is funny. All I would've thought you'd need is a score, which you'd increment if you get a shot of espresso out quickly, or decrement if you touch a hot pipe. Let people figure out the rest!
vorotato 1 hour ago 1 reply      
"no matter what warning we flashed if someone virtually touched a hot steam nozzle, they frequently got too close to it in the real world, and we needed a chaperone at the ready to grab their hand away. "

Obviously they need to "die" in game, a few times starting the whole thing all over from the beginning and they'll never "burn" their hands again. I wish I could tell the author this....

On Password Managers tbray.org
133 points by tmorton  2 hours ago   139 comments top 33
tptacek 2 hours ago 10 replies      
The 1Password situation is complicated, and is a lot less sketchy than Bray's summary would lead you to believe. 1Password has not in fact phased out their native applications or required people to use 1Password.com to store passwords (it would be insane for them to do so).

There are four issues that I'm currently aware of with 1Password:

1. They've converted from flat to subscription pricing.

2. They're pushing people to a 1Password-managed cloud sync system instead of the a la carte sync they were doing before.

3. They're promoting cloud vaults and hiding local vaults, and the Windows version of 1Password has apparently never used local vaults.

4. Now that they have 1Password.com, first-time enrollment in 1Password requires you to interact, once, with 1Password.com.

Of these, only (4) is a serious security concern. Their last release further eliminated the native app's dependency on 1Password.com. I'm confident they'll get all the way towards decoupling them, but I'm not them, so grain of salt.

I have no relationship with 1Password other than as a happy customer and as someone who does research in the field they work in. Having said that: I strongly recommend that you be very careful about what password manager you choose to use. The wrong password manager can be drastically less secure than no password manager. I recommend 1Password, and there's currently no other commercial password manager that I recommend. I'm sorry I can't go into more detail than that. :(

chipotle_coyote 3 minutes ago 0 replies      
I'm a 1Password user, and have synced my vault between devices through both Dropbox and iCloud at various points. I can't help but feel like either there's something I'm missing or something everyone else is missing, which statistically means that it's most likely me. But:

When I sync with iCloud, Apple can't read my vault--even though it's on their servers, it's strongly encrypted with my passphrase, and the encryption/decryption happens on my devices.

When I sync with Dropbox, Dropbox can't read my vault--even though it's on their servers, it's strongly encrypted with my passphrase, and the encryption/decryption happens on my devices.

When I sync with AgileBits' own cloud... doesn't the sentence go exactly the same way? Quoting from their own current web page: "Every time you use 1Password, your data is encrypted before a single byte ever leaves your devices."

So even if the vault is on AgileBits' own servers, isn't it _no more and no less secure_ than the third-party syncing solutions they offer? Maybe that's not the case, and things actually function differently--but I haven't seen anyone describe why that would be the case. Again, maybe I'm just missing it. But I keep missing it. And it's not in Tim Bray's article, either. He's fine with putting it on somebody else's server if that server is run by Dropbox, but not if it's run by the company that he's trusting to encrypt it against people hacking Dropbox? How is this materially different from using iCloud, Dropbox, or any other solution that puts a copy of my vault on someone else's servers for syncing purposes?

If the real argument is that there should always be a way to use a password manager with _no_ cloud-based syncing solution, I'm on board with that; it'd be a requirement for some businesses. But that doesn't seem to be the argument that's being made. And if the real argument is that you don't like subscription pricing models, that's fine. I don't like them, either. But that's not an argument about security--it's an argument about pricing models.
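For what it's worth, the property being debated here -- "encrypted before a single byte ever leaves your devices" -- is easy to sketch, and it holds regardless of whose server ends up storing the blob. This is a toy illustration (PBKDF2 key derivation plus an HMAC-counter keystream, no authentication), not 1Password's actual scheme:

```python
import hashlib
import hmac
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Slow key derivation so the passphrase can't be brute-forced cheaply.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # HMAC-SHA256 in counter mode as a simple pseudorandom keystream.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_vault(passphrase: str, plaintext: bytes):
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(passphrase, salt)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return salt, nonce, ct  # only these ever leave the device

def decrypt_vault(passphrase: str, salt: bytes, nonce: bytes, ct: bytes) -> bytes:
    key = derive_key(passphrase, salt)
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
```

Whoever hosts `(salt, nonce, ct)` -- Dropbox, iCloud, or the vendor -- never sees a key, which is the parent's point.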

tedmiston 2 hours ago 7 replies      
Just to be clear, it's still 100% possible to keep your 1Password vault in Dropbox etc and not use the SaaS version [1]. I felt like this fact was buried in the article.

Edit: Here's the link to buy the standalone license [2] which is hard to find on the site now.

In a post from the founder one week ago [3] he said, "We know that not everyone is ready to make the jump yet, and as such, we will continue to support customers who are managing their own standalone vaults. 1Password 6 and even 1Password 7 will continue to support standalone vaults."

[1]: https://support.1password.com/sync-with-dropbox/

[2]: https://agilebits.com/store

[3]: https://blog.agilebits.com/2017/07/13/why-we-love-1password-...

vikingcaffiene 6 minutes ago 0 replies      
Good security hygiene is like a diet or exercise plan: the most effective one is the one you will stick with. Most users don't follow good habits because it's a giant pain for non-technical users to get set up. 1p's subscription plan is aimed squarely at those people and I think it's a great idea. It's reasonably secure and easy to set up everywhere. That is a big deal in my mind. Yes, it's not bulletproof, but it's 100000% better than the current status quo.

Additionally, managing your own password vault is a lot like managing your own email server. There are advantages, but I feel that the disadvantages are substantial. For one, betting that you, one person, are going to do a better job of securing your stuff than a dedicated team is optimistic at best. Keeping your password vault safe is literally this company's full-time gig and they have entire teams dedicated to it. Do I think they are infallible? Of course not. I'm not an idiot. But I think they are going to do a better job than me at keeping my stuff safe. I happily will pay for that every month.

The author's point about the 1p web portal is a good one. I don't use it out of similar concerns. Besides that, I really could not be happier with 1p as a password management solution. They have a good track record (no hacks that I am aware of) and I want the company I trust with literally the keys to my kingdom to be profitable and motivated to keep improving.

pixelmonkey 1 hour ago 3 replies      
I use Enpass on Linux, Windows, OS X, Android, and iOS. I also use the Chrome extension. It has a similar user experience to 1Password, but is actually serverless (you sync your encrypted blob to a cloud service of your choice, or not at all). I wish Enpass were open source, but I can understand their decision not to make it so -- its desktop application is free and its mobile apps include a small perpetual license fee ($10 per user, one-time). The format of the encrypted blob is a simple SQLCipher database that uses your (memorized) master password as the secret key, so even though the application is closed source, the data seems to be stored in an open format. Overall, it's probably the best option on the market in a very bad category of software. After evaluating them all, IMO, you should run away from 1Password, Dashlane, Lastpass, etc and use Enpass instead. Even better if the place you sync your encrypted blob is protected by strict 2FA and has good (enforceable) privacy policies.
harrisonjackson 1 hour ago 0 replies      
With a couple UI/UX enhancements, Apple could take over the iOS/MacOS marketshare of these products with Keychain. It's already possible to use keychain in your workflow for password management, it's just not super convenient.

I'd switch from Lastpass, if Apple made it easier to autofill and autogenerate passwords and added support for sharing / teams.

braink 2 hours ago 1 reply      
I totally agree with Tim Bray's post. The bottom line is that the pestering that I get from AgileBits makes me, as a customer, really doubt their integrity after trusting them for years. Why are they trying to force me to do this? Obviously because they want more money (but they are betraying their own oft-stated security attitudes) and maybe even for some other reason (the backdoor thing?).
moskie 1 hour ago 2 replies      
The one place that 1Password doesn't meet my needs is in ChromeOS.

The browser plugin requires the machine you're on to have the 1Password app running in the background, which is how it gets its data from the local (and synced) vault. But there is no 1Password ChromeOS app (and I don't think it's really even possible for there to be something like that in ChromeOS), so the browser plugin does not work in Chrome on ChromeOS devices.

A while back, I think the 1Password synced vault files would also have an HTML file you could load up in a browser, which would then communicate locally with the encrypted vault to gain access to your passwords, which was a workaround on ChromeOS. I'm not sure of the security implications of that process, but it isn't supported anymore.

I really like the locally synced vault with browser plugin functionality, but the fact that there isn't a solution on ChromeOS has been a sticking point for me. I've gone the route of having Google store 1Password generated passwords via Chrome's password features, for sites that I regularly access via ChromeOS, which works, but feels excessive.

trjordan 2 hours ago 1 reply      
If I understand correctly, the main problem here is that if a password manager at some point asks you for a password in an online environment, they're subject to coercion. This is especially dangerous if you're using auto-updating code like Javascript in a browser or code on a remote service, because it could get backdoored at any time and you wouldn't notice.

Isn't the real problem auto-updating code with access to a network? 1password.com is certainly another vector that fits this description, but if you don't trust AgileBits to manage 1password.com securely, why would you trust them to manage the app on your machine securely? Or the auto-updating Chrome plugin?

I'm not denying that there's more surface area by creating a login, but I think it's a false dichotomy to say that the app is "offline" and the website is "online". They both have network access, and if AgileBits or a random hacker can change the app's code, they'll do that. That change will be mindlessly delivered to your computer, and the bad guys will have all your passwords.

LordHeini 2 hours ago 3 replies      
At our company we use keepass2 with a db file synced by Dropbox. Works nicely. Keepass can save all sorts of stuff alongside passwords (like credentials, API tokens...) and there is an app too (for Android at least). Might get a bit clunky if lots of people change a lot of stuff all the time, but for us it is not a problem.
rrix2 1 hour ago 0 replies      
More and more, I'm recommending that friends and family get a Mooltipass[1]. It's open source, it works on any platform that supports USB HID (including mobile devices using an OTG cable), it's got multiple browser plugins, and it allows you to have "two factor" auth by separating the pin-protected crypto key from the device itself using smart cards.

The device can be backed up, and the cards can be backed up too (since unfortunately it's not doing the crypto on the card, the card is just a verifiable pin-protected way to store the AES key) and it's an obscure enough looking device that it's not yet an easy theft target.

[1]: https://www.themooltipass.com/

danr4 2 hours ago 2 replies      
The only cloud-based password manager I'm willing to use is Dashlane[1]. It's supposedly "zero knowledge", and although you can never be 100% sure there isn't some bug waiting around to be exploited, it's a compromise I'm willing to make (the lesser evil). They also have several complementary features like encrypted notes, auto-saving receipts, credit cards, and a batch password changer that works with quite a few major sites.

I'm not affiliated with them; it's just that I never see them on HN compared to mainstream applications like LastPass, 1Pass, OneLogin and such, and I think their services are better. Plus their support is great.

On the other hand, if everybody starts using it maybe it'll become a bigger target for hackers. so don't tell everyone :)

[1] http://dashlane.com

danirod 2 hours ago 2 replies      
I've been using password managers (KeePass, in my case) for about a year and all I can think is: why didn't I start using them earlier? It is cheaper to generate a long, random password using alphanumeric and special characters than to try to think up a clever yet memorable unique password myself, and probably more secure.

Plus, it's true that you end up storing other sensitive things that are not passwords, such as API or recovery keys, because it acts like a vault.
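For anyone curious, generating that kind of password takes a couple of lines with Python's `secrets` module, which draws from a CSPRNG (a minimal sketch; the character set and length here are just examples):

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    # Draw each character from a cryptographically secure RNG;
    # never use the plain `random` module for secrets.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```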

jaclaz 2 hours ago 1 reply      
IMHO this part is where the nail is hit right on the head:

>Why is AgileBits doing this? For the same reason that Adobe has been pressuring its customers, for years now, to start subscribing to its product, rather than buying each successive version of each app. A subscription business is much nicer to operate than one where you have to go out and re-convince people to re-buy your software.

It is the part (common to many other software vendors) where they stress the "I am doing this for your own good" that irks me.

You want to change your business model? Fine.

Do you believe that this new one is better? Fine.

Do you want to convince me that you are changing the "old" model (which BTW you used until a nanosecond ago) because it is better for me? Hmmm.

pc86 1 hour ago 2 replies      
Does anyone know anything about Dashlane? I had a free commercial account from a previous employer and it seemed nice, other than the popup every time you logged in to an unknown website asking you to save your credentials. I'm pretty sure that was configurable, though.

I don't see Dashlane spoken about much in these conversations (I have no affiliation).

malchow 2 hours ago 1 reply      
I totally missed this switch by AgileBits. Does anyone know how to ensure that the data file continues to be synced to Dropbox or iCloud, not AgileBits? (Looking into my configuration, it would appear that AgileBits has silently moved my data from iCloud to the AgileBits cloud.)

EDIT: Found: https://support.1password.com/sync-with-dropbox/

laurencei 59 minutes ago 1 reply      
I have 1Password and I love it.

But my biggest fear is: if my laptop were ever pwned in some way, due to some novel 0-day etc, everything stored in 1Password could be compromised. But more importantly - the hackers would have an address book of banks, servers, databases etc that I have access to.

I don't know if there is a solution - but I feel it is like putting all your eggs in one basket.

ctingom 20 minutes ago 0 replies      
I'm still using 1Password Version 3.8.22 on my Mac. Should I upgrade?
corybrown 1 hour ago 1 reply      
I've moved from LastPass to KeePass, but the biggest thing I miss from LastPass (other than the better browser integration) is a good CLI client. Lastpass-cli is great, and kpcli just isn't.

Anyone have a recommendation for a good CLI client that isn't `pass`? (I don't want to deal with GPG)

raverbashing 2 hours ago 0 replies      
I use password managers, but I think the usual way of thinking about them is wrong

Putting aside that password reuse is not recommended, the main issue is: most websites don't give an eff about whether they store your password correctly or not

It's a trust asymmetry: they ask you to provide a password (and most impose a lot of BS restrictions), THEN md5 it and put it in the database, or worse

And as said by the article (and implied by the above paragraph), there are better ways of obtaining someone's password - pwd managers are not the weakest link, at least not now
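To make the storage point concrete, here's what the gap looks like using only Python's stdlib -- the careless version versus a unique salt plus a deliberately slow KDF (the iteration count is just an illustrative choice):

```python
import hashlib
import os

password = b"correct horse battery staple"

# What a careless site does: fast and unsalted. Identical passwords hash
# identically, and precomputed tables reverse common ones instantly.
bad = hashlib.md5(password).hexdigest()

# What it should do (at minimum): a per-user random salt and a slow
# key-derivation function, so each guess costs the attacker real work.
salt = os.urandom(16)
good = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
```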

noja 40 minutes ago 0 replies      
Someone please write a front-end that uses Vault as the backend. Then a plugin to integrate with Firefox :)
bsilvereagle 2 hours ago 2 replies      
Encryption Wizard [1] solves issues 1-4, but is severely lacking on #5 (device syncing). It also has no mobile support.

I've performed a cursory search to see if any OSS password manager comes close to EW on features, but didn't find anything:

* Supports CAC encryption/decryption

* Allows you to store contacts' public certs

* Allows keys to decrypt

* Generates passphrases

* Allows multiple keychains to be opened at once

If anyone is looking for a (probably not profitable) OSS project/business, I would pay probably upwards of $100 for a perpetual/source available license for an Encryption Wizard clone with a mobile client & some built-in support for syncing.

[1] https://www.spi.dod.mil/ewizard.htm

malchow 1 hour ago 1 reply      
Is it still the case that the 1Password Master Password is never transacted over the web, even on 1Password.com? The encrypt/decrypt is done in the browser?
peterkshultz 2 hours ago 6 replies      
Any password manager recommendations such that people don't need to deal with 1Password's cloud-based storage?
akurilin 2 hours ago 0 replies      
Any alternatives to 1Password / LastPass that support Google's SSO? I tried TeamsID before and it was ok, but not nearly as feature-full as I was hoping: e.g. no automatic auto-fill on the page you land on, no password generation for new websites.
Sweetlie 1 hour ago 2 replies      
I'm surprised nobody has cited LessPass, https://lesspass.com/#/

Nobody stores your password; it's purely stateless. You can access the software via the official website, your own website, a web plugin, or the terminal.

see this blog: https://blog.lesspass.com/lesspass-how-it-works-dde742dd18a4
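The stateless idea fits in a few lines. This is a generic sketch of deriving a per-site password from a master password -- not LessPass's actual algorithm, which also handles character-class rules and a counter:

```python
import hashlib

def derive_site_password(master: str, site: str, login: str, length: int = 16) -> str:
    # Deterministic: the same inputs always yield the same password,
    # so nothing needs to be stored or synced anywhere.
    raw = hashlib.pbkdf2_hmac("sha256", master.encode(),
                              f"{site}:{login}".encode(), 100_000)
    alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    # Map bytes to characters (modulo bias is ignored in this toy).
    return "".join(alphabet[b % len(alphabet)] for b in raw[:length])
```

The flip side of statelessness: change the master password and every derived password changes with it.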

deedubaya 2 hours ago 0 replies      
I thought 1Password confirmed that the cloud based storage is the default for new users -- existing and more security conscious users can still use whatever data store they choose?
amelius 2 hours ago 0 replies      
If only I had a keyboard with an NFC chip, and some password software on my phone ...
dawnerd 2 hours ago 0 replies      
1password should just release a paid (subscription even) self-hosted version. They already have the domain bit in their apps, I can't imagine it being too much effort to work with any host.
xoa 1 hour ago 0 replies      
I agree with where he's coming from overall. Password managers [1] are a very important practical security measure that general users should be utilizing for the foreseeable future, and one where a good UI (as 1P and other commercial ones offer) is a genuine security feature, not just a nice-to-have, because their security implications are directly tied to how much users utilize them. That means while technical users will always have solid OSS solutions no matter what, it's worth paying attention to what major proprietary ones are doing too. This shouldn't be dismissed purely because KeePass variants or whatever exist.

And I definitely don't like the business incentives subscription models generally create when it comes to standalone software development (as opposed to a server-based service), and so far the major moves to them I have experienced (such as Adobe's) have reinforced my concerns. While in the short term individual personalities can of course do whatever, I think in the medium to long term it's very hard for development direction to stay divorced from whatever the direct economic incentives of the business model are. In turn thinking about that is one of the more important factors in thinking about to what degree a company can be depended on over the years. Because:

1. Humans have a strong tendency to favor the status quo unless there is a disruption (HN crowd likely deals with this frequently, such as with the immense power of defaults in UI design).

2. Low constant noise triggers less consideration than occasional larger spikes, even if the former adds up to more in the same time period.

3. There is direct loss associated with stopping.

4. Lock-in increases.

Subscriptions are well known to be a lot stickier and less sensitive to stagnating software, pricing changes, etc., than per-version purchases are. Companies can put out "being able to focus on the longer term!" but fundamentally subscriptions remove a significant form of customer-oriented hard discipline and incentives. Some devs might be able to continue the same without it, but many clearly cannot. And I want to emphasize that this isn't at all necessarily because of any maliciousness or even greed, no "haha now we have them where we want them". It's just that a lot of humans will lose focus without some sort of hard-to-subvert, reasonably fast outside feedback loop. Subscriptions also encourage feature development and testing towards a single vertical ecosystem, even if other approaches would be perfectly viable.

AgileBits says they're keeping standalone licenses, but I see nothing about reasonable feature parity. I also agree that one of the best ways to assuage concerns is full honesty, including acknowledging obvious conflicts of interest, and in that light I agree it would have been valuable to see at least something about how this boosts their revenue, and how they're aware of the risk of making standalone licenses second class citizens and will watch for it. They've been a solid company and made a solid product overall however, so I'm willing to give them the benefit of the doubt here for now. It'd be a shame if they ultimately do go sub-only at some point, even if data can be trivially dumped to other programs.

Maybe by that time though progress will be made on finally getting websites away from password authentication entirely and in turn PMs can be rendered mostly a historical artifact.

As an aside, though I think this blog is aimed at a general audience, there are a few misunderstandings in it that are significant, since they're not that complex but they feed further misunderstandings. For example:

>In the 1Password app's sync model, however, one assumes they use the pretty-secure HTTPS-based APIs for each of these products, machine to machine, no JavaScript in the loop.

The author himself correctly states that in 1Password's (or KeePass's, or any other client-based encrypted database setup's) case they're using purely offline-app endpoint encryption, and part of the entire point of that is that the transport mechanism is irrelevant. There is no need to trust anything beyond what exists on the endpoint. This matters because it relates to some of the other concern points he raises, not just cloud storage location but for example "backdoor code in a future 1Password app release that sends the goodies to the enemies". An endpoint password manager that allows abstracting sync from the application itself, at least optionally, can in turn be isolated from any net access (and/or any attempts monitored), which reduces that threat profile as well.


1. Effectively a mediocre reimplementation of public key auth on top of 90s-era website authentication practices that have proved sticky.

lifeisstillgood 59 minutes ago 0 replies      
It feels like a comparison of the available options out there is something "useful to the world".

I am not too sure how to do that but would value comments from people who have used open source password managers, or even read the code!

Shall we?

My assumptions for this list of recommended apps is at minimum:

- a single file in a well-known format is stored on a cloud service, and can be read / updated from different devices and platforms

- as this is encryption, we prefer open source code and trusted binary makers

My experience:

I use pwSafe on iOS (binary from some random guy). This backs up to Dropbox.

I have a python script based on pypwsafe3 that can read the file on Linux. I have not yet tried bi-directional sync.

I know pwSafe is based on Schneier's Windows version, but frankly I have not tried to find the code or validate the binary.

So - is it worth building some kind of knowledge base here?

netrap 2 hours ago 1 reply      
How can you talk about 1Password but not KeePass?
guelo 1 hour ago 1 reply      
Against all recommendations I reject all password managers. I feel like all security software is eventually compromised, most frequently by business folks as in this case. Instead I use a tiny notebook that I keep in my wallet. I pick long 12+ character passwords myself, not super randomized but I haven't heard of a brute forcing attack in a long time. It allows me to easily meet weird password requirements. I feel pretty secure that it's not on a computer. Admittedly I also use Firefox's password manager to avoid typing them in all the time. I trust Mozilla for now, though I wouldn't be surprised if they are eventually compromised as their market share goes down.
Front-End Walkthrough: Building a Single Page Application from Scratch poki.com
43 points by rizz0  3 hours ago   23 comments top 7
hitgeek 1 hour ago 0 replies      
"We knew we wanted to build a single page application (SPA) in order to have more control over the user experience of our website, making it as smooth as possible. On top of this, it also helps speed our website up since there's no longer a need for full page reloads. We only need to load the data that we don't have yet, and then re-render the page."

I'm bothered by this perception that SPAs inherently provide a better user experience. They certainly can provide a different user experience, but "better" is entirely up to the developers. I'd argue it's actually quite hard to create a better UX in an SPA than the simple page-based UX metaphor everyone is used to that the browser provides. Netflix is an example of a great UX from an SPA; NBA League Pass is an example of a disaster SPA.

Also there is no inherent speed boost from an SPA. Anything that is slow on the server will still be slow, and it's up to the developer to create a good UX for latency. In page-driven applications, the browser provides a fairly standard UX for page loading that most people are used to. In SPAs, the developer needs to roll this themselves. Widgets popping in all over the place at different times, moving things all over the page, is a common UX I see in SPAs that is not good.

throwaway2016a 1 hour ago 2 replies      
This seems like a great writeup with some good information in it.

But to be picky, the title needs work. It isn't as describe. In fact it contradicts itself.

The title is:

> Front-end Walkthrough: Building a Single Page Application from Scratch

Half way down the article:

> When it comes to building a SPA, you can either do things from scratch or use a library to help you out. Building from scratch is a lot of work and adds a lot of new decisions to be made, so we decided to go with a framework.

I would be very interested to see an article in 2017 that is actually from scratch. Bonus points for not using a ES6 transpiler. Like "Linux from Scratch" (the book)... useless from a practical standpoint but awesome from a learning standpoint.

Edit: as a side note, when I started making web apps, jQuery was still in beta so I didn't even use it. A lot has changed, obviously; for one, SPAs didn't really exist then.

z3t4 13 minutes ago 0 replies      
I want to point out that it works perfectly fine to make a JavaScript web app in just vanilla JavaScript, using NodeJS as the server and the browser as the client. You do not need any frameworks!! Vanilla JavaScript works for both small projects and big projects. And it performs well (at least compared to the popular frameworks) and it's very nice to debug! There is no compilation step! And your code will be supported forever, unlike the frameworks that will make your code obsolete within a year or so.
daliwali 1 hour ago 7 replies      
Alan Kay was right, programming is pop culture.

It's increasingly harder to find non-legacy single page apps that aren't using React or Angular (Vue and Ember are distantly trailing behind). Corollary: it's increasingly harder to find jobs for anything other than React or Angular.

After technology du jour becomes popular, people will use it not because it's good or even necessary, but because it will ensure their employment. And then the rationalizations sink in afterwards.

water42 1 hour ago 1 reply      
yet another web framework medium article with a misleading angular vs react comparison.

it isn't angular 2 anymore. it's just angular. it's been out for over a year and if people were having problems running it in production, we would hear about it. there are many sites running angular 2-4 in production, google it and see for yourself. just because google didn't rewrite gmail in angular doesn't mean you shouldn't use it.

I wonder if there is some highly ranked google search result that spreads this misinformation, months after some of these points were valid.

CryoLogic 1 hour ago 1 reply      
Most of the issues cited with Angular and React aren't really issues in EmberJS right now. I always find it interesting that so many people jump right into frameworks that have design philosophies against maintainability.
peter_retief 1 hour ago 2 replies      
I am doing an SPA with vuejs and a django backend, and it's a huge step in the right direction as far as (my) web development goes. Initially I was sceptical of webpack; now I see what an asset it is in compiling static content, along with many other handy features. I prefer vuejs to react or angular but I haven't really given them much time, maybe one day
Kaisa Matomäki dreams of prime numbers in short intervals quantamagazine.org
43 points by heinrichf  3 hours ago   15 comments top 6
mixedmath 48 minutes ago 0 replies      
I like many of quanta's articles on mathematicians and scientists, and I appreciate their contributions to science journalism. I would like to add one aspect that I think this article downplays, but which is understandable to a large audience.

Kaisa and Maksym study multiplicative functions, i.e. functions which satisfy f(ab) = f(a)f(b) if a and b are relatively prime. A big part of Kaisa and Maksym's fundamental technique boils down to understanding completely the behavior of f on small primes, and giving somewhat loose bounds on the rest. Central to their success is their ability to make quantitative the intuitive statement that "most numbers have lots of small primes as factors". This required a few new ideas, but the nugget is quite simple, I think.
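For the curious, the defining property is easy to check numerically with Euler's totient function, a classic multiplicative function (a quick illustration of the definition, not part of their technique):

```python
from math import gcd

def euler_phi(n: int) -> int:
    # phi(n): how many integers in 1..n are coprime to n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Multiplicative: f(ab) == f(a) * f(b) whenever gcd(a, b) == 1...
assert euler_phi(3 * 10) == euler_phi(3) * euler_phi(10)

# ...but the coprimality condition matters: phi(2*2) != phi(2) * phi(2).
assert euler_phi(4) == 2 and euler_phi(2) == 1
```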

brianberns 1 hour ago 1 reply      
As a layman, I wonder about the connection between primes and random numbers. Obviously, the primes are not actually random, but they seem to exhibit a lot of similarity to random numbers in that they are chaotic over short intervals, yet smoother and better behaved over long intervals.
tarre 34 minutes ago 1 reply      
I still remember Kaisa telling us in primary school that Euclid's proof of the infinitude of primes is often considered the most beautiful proof in mathematics.
seycombi 36 minutes ago 0 replies      
Why prime numbers are useful to web designers: https://www.sitepoint.com/the-cicada-principle-and-why-it-ma...
sevenfive 1 hour ago 1 reply      
Their example uses log_10 instead of ln
mudil 1 hour ago 4 replies      
Can anyone explain to me why primes are so important in cryptography?

I understand that they are used for factorization, where it is easy to multiply two primes and get a number, but difficult to work out from a big number which two primes were used to get it. So the question is: why not have a big database of primes, and when we have that big factorized number, try to divide it by each prime from the database and see if the result is another prime from the database? Wouldn't that work?
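A back-of-the-envelope answer: no, because there are far too many primes to ever enumerate. By the prime number theorem, pi(x) is roughly x/ln(x), so the count of primes up to 1024 bits (the size used in RSA-2048) is around 2^1014 -- vastly more than the roughly 2^266 atoms in the observable universe. A rough sketch:

```python
from math import log, log2

bits = 1024
# Prime number theorem: pi(x) ~ x / ln(x). In log2 terms, the number of
# primes below 2**bits is about 2**(bits - log2(bits * ln 2)) ~= 2**1014.
log2_count = bits - log2(bits * log(2))

atoms_in_universe_log2 = 266  # ~10**80 atoms
assert log2_count > atoms_in_universe_log2
```

So no database could hold them, and trial division against it would be hopeless anyway.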

Say Goodbye to Spain's Three-Hour Lunch Break citylab.com
166 points by danso  5 hours ago   134 comments top 21
mcjiggerlog 4 hours ago 7 replies      
Literally nobody here takes 3-hour lunch breaks. A lot of offices have 2 hours, yeah, but this is becoming less and less common regardless. Small shops do close for 3 hours in the middle of the day, but they are open till about 9pm, which is a lot more useful than being open in the middle of the day. It also should be noted that there are actually 5 meals in a Spaniard's day. There's a mid-morning and a late-afternoon snack, which is the actual reason for the "late" lunches and dinners.

Also, all changing the timezone would do is give us useless light early in the morning and less light in the evening to go out and play sports, hang out in the park etc. People would still eat at the same time regardless. It gets dark at 6pm in the winter and 10pm in the summer. It's not exactly crazy.

professorTuring 4 hours ago 1 reply      
I'm from Spain and this article does not represent Spain. Let me expand on this.

The typical workday in Spain (no matter if you are a worker or in an office) goes from 7:00/9:00 to 15:00/18:00. This is usually a 8 hours work day (a lot of unpaid extra hours are quite typical in any IT job).

Usually there are two breaks before lunch: one at the beginning of the day, usually 15 minutes to have a coffee with coworkers, and another of 15-20 minutes at midday (11:00/12:00) to have a piece of fruit or just another coffee.

At lunch there are two options: the ones that leave work at 15:00 usually eat at 15:30-16:00 (quite late but quite common); the rest take an hour to have lunch (usually two dishes and dessert [a bit too much, I admit]) and are normally allowed to leave once they complete 8 hours at work.

There is a lot of flexibility if you are not public-facing, so you can play a bit with the arrival/leaving times (usually you are allowed to arrive up to 10:00 in the morning).

Only people who live really close to home (up to 10/15 minutes) go there to have lunch with their family, and they usually also take a power nap of 15 minutes or so.

Spanish siesta is for Fridays and holidays.

jorgemf 4 hours ago 4 replies      
This is a nonsense article, as other Spaniards have commented. The only companies that close for 3 hours in the middle of the day are the shops, usually small shops. They do it because for a long part of the year no one goes out on the street from 2pm to 5pm, because it is so fucking hot. So instead, our culture closes the shops at those times but keeps them open until 9pm or later. It is very common to see shops full of people from 7pm to 9pm.

Some offices have a 2-hour lunch break, which I also find nonsensical. But 1 hour is usually not enough for us for lunch; we don't eat sandwiches, we are used to having a proper meal with 2 dishes, dessert and coffee.

We also eat quite late, around 2pm or 3pm, but that is mostly because our time doesn't match solar time. We should have 1.5 hours less on our clocks, which means we really eat at 12:30-1pm in sun time. That is what most countries do as well.

We do take naps, but only in summer on our holidays, usually from 3pm to 5pm, because as I said, it is so fucking hot in summer at those hours. But any foreigner who is here in summer does the same (actually, probably a higher percentage of foreigners take naps than Spaniards). People who have an intense 8-hour workday from 7am to 3pm also take naps, because they wake up quite early and social life starts at 6-7pm.

Bonus: we don't add chorizo to paella. Chorizo is mostly for the BBQ.

franciscop 3 hours ago 3 replies      
I am sorry but this is totally backwards. As a Valencian Spaniard I want to set the record straight.

First, the siesta is not dead. However, it is considered mostly a holiday luxury, or reserved for small shops. It is normal in the extremely hot summer: after lunch we stop for an hour or so. While I do not take a siesta myself, I would say about half or more of the people I know do it (during holidays). I would also say it is highly coupled with the summer heat wave, though; in winter not so many people do it.

Then, late lunch. First off, Spain is actually in the "wrong" time zone. We are at the same longitude as the UK, but it's 1 hour later here. What this means in practical terms is that when it's 2pm here, it is 1pm relative to the light-time/circadian clock.

However, that still does not totally explain the really late lunches. The main reason is that we have "almuerzo" at around ~11am (note: in some parts of Spain the word "almuerzo" means "lunch", while in others it's a mid-morning snack, or second breakfast as I like to call it; I mean the mid-morning snack here). It depends on the job, the person and many other things, but it's not uncommon for it to consist of a mid-sized sandwich of Spanish bread. This [random] image is a quite accurate picture of the places where almuerzo is had and of the type of almuerzo we eat: http://cdn.traveler.es/uploads/images/thumbs/es/trav/2/s/201...

So as you can see, we have a separate, important course around 10:30-11:30 that keeps us from staying hungry until lunch. Some people prefer lighter snacks, of course. Also, breakfast is normally light, as we don't have to worry about hunger since we have our almuerzo later on. Normally you'd balance breakfast and almuerzo: if you have too much breakfast then you have a light almuerzo or nothing at all, and vice versa.

faragon 21 minutes ago 0 replies      
That's separatist propaganda: the claim is that Catalonia (a Spanish region where separatists are in power and plan to hold a referendum against the rule of law in October, so the region can exit Spain and the European Union and be "free") is "better" than the "lazy Spain". Rationale: in Spain there is no such thing as a generalized three-hour lunch break. In fact, most industries have a short lunch pause (e.g. 7 to 15:30h), and most offices take just one hour (e.g. 9 to 18h). The long break is only true for shops (e.g. 9:30-13:30h and 17-21h), and that's how shops work almost everywhere, not something specific to Spain.
jordigh 4 hours ago 0 replies      
This is also part of Mexican culture, sort of. In Mexico City, we would break for lunch around 13:00-14:00 and just take our sweet time, with no pressure at all to go back into the office. The workday would end around 18:00-19:00. I think this has its pros and cons: you get a nice big break in the middle of the day and time to enjoy your meal, but you also end up staying in the office way too late. Commute times of one hour are not unusual in Mexico City either, so your whole day is basically gone on the job.

Note that eating schedules may also be different from the Anglosphere's. It took me a while to get used to a light meal around noon and a heavy meal around 17:00, which seems more common in the US and Canada. In Mexico the big meal of the day is around 14:00, and you might have a lighter meal, almost a snack, around 21:00 before bed.

Going home for lunch to eat with your family is a bit of an old-fashioned custom in Mexico, but I hear some people still do it. None of my coworkers in the tech sector did, though. We were all mostly young too.

readhn 4 hours ago 11 replies      
In the startup arena I'm in (USA), folks often work 8-9am till 7-8pm with half-hour to one-hour MAX lunches.

No one complains; people just burn out and move on in 1-3 years. No one gets overtime. It would be nice to have an open discussion with HR or whoever, but it's a sensitive topic; in this culture you are not "allowed" to tell how much you actually work...

I guess it's a "dirty little secret", but 9-5 does not exist anymore (honestly, it probably never did, at least in my field).

untog 4 hours ago 6 replies      
Hardly surprising, and probably positive for many Spaniards, but it still makes me a little sad that every country is converging on a very similar way of working.

Globalisation makes it inevitable, of course, but does Monday-Friday, 9-5 really make sense? Why not 8-6 Monday-Thursday with a three day weekend? We developers are often lucky that we can dictate our own hours and experiment with this, but I'd be fascinated to see an entire nation adapt to it and see the effects.

logronoide 49 minutes ago 0 replies      
Spaniard here; this is absolutely not true. The 3-hour lunch break is a myth, except maybe for shops in small towns and villages.

A one-hour break for lunch is the norm. Maybe two, but that's not very common.

There is an exception here: in summertime, a lot of companies reduce working hours from 8 to 7 per day. They start at 8am and leave at 3pm. The missing hours must be made up over the rest of the year, so you work 7 hours in summer and 8 and a half the rest of the year. This summertime schedule comes from the days when air conditioning did not exist and working in Spain in the afternoon was literally impossible.

It's really funny to read about Spanish topics from a poorly informed journalist...

menor 16 minutes ago 0 replies      
Another Spaniard here. Reading the article, you get the impression that Spain is some kind of dictatorial country where the government dictates which times you are allowed to work. It is not; companies have the freedom to choose, unless you work for the public administration (where I lived, they work 8 to 15).

BTW, I live in Germany now and still take a daily 30-minute siesta after lunch.

scruti 4 hours ago 2 replies      
I've lost count of the times I've had to explain here in the UK that we Spaniards don't go home to take a nap every day at lunchtime.

Nowadays this 3-hour break mainly affects small shops rather than offices/big companies.

And, being honest, it was quite useful, since shops closing at 8pm allow people to do their shopping chores after leaving the office/school/uni...

Edit: My English... U_U'

maxxxxx 5 hours ago 1 reply      
This makes sense. In the village where I grew up, a lot of people walked home over lunch and came back to work two hours later. This was great, but with commutes it doesn't make sense.
dispo001 4 hours ago 0 replies      
It depends on the type of work, but in general it seems most fun to have more flexibility.

Instead of starting at 6:00, one could start between 5:00 and 7:00. Then have a break between 11:00 and 13:00 for 30 minutes or 3 hours, whatever the fuck you want.

This is of course assuming we would allow people to enjoy life (a Utopian idea most people would fight against).

scalesolved 1 hour ago 0 replies      
I've worked for two different tech companies in Barcelona, neither had lunch breaks longer than an hour.

When I moved from the UK I was shocked that 9-5 was not the norm; in both places the schedule was more like 9-7, with the same amount of lunch time as in the UK and far more pressure to stay later than 7.

spaniard_dev 48 minutes ago 0 replies      
This article is full of bullshit, prejudgements and false facts. There is probably only one nap bar in the whole of Spain, and he had to add it to the article.
santialbo 4 hours ago 2 replies      
Just a note: these working hours happen only in retail.
geodel 3 hours ago 0 replies      
I have seen the same thing in India. In small towns most of the markets close down for a few hours in the afternoon: in summer to avoid the heat, and in winter to sit in the sun. Again, with globalization I think this is going away in India too.
rodolphoarruda 1 hour ago 0 replies      
I had a concall 2 days ago with a Spaniard from Madrid. They are leaving the office by 3pm now in summertime. That means a lot of daylight left to do whatever else you want.
duxup 3 hours ago 1 reply      
With lunch breaks like that... what do parents do? Go see their kids? Because if not, when do they see them on workdays?
gadders 3 hours ago 1 reply      
I remember going to work in the small Madrid office of a large private bank 20 or so years ago.

Being keen, I turned up at 8.30am and then waited an hour for someone to arrive. The office was then closed and everyone was turfed out from 12pm-2pm, but the working day didn't finish until around 7pm.

erikb 1 hour ago 0 replies      
Bitcoin Transaction Malleability eklitzke.org
59 points by eklitzke  4 hours ago   13 comments top 4
Uptrenda 11 minutes ago 0 replies      
This is a very complicated way to explain TX malleability. I'd say the problem is that signatures only sign a portion of the transaction, while the TXID used in the blockchain is based on hashing the entire transaction.

So the signature can be mutated as the author suggests, but the signature doesn't sign the entire transaction anyway: the section where data is provided to the redeemScript to satisfy its conditions, called the scriptSig, includes the sig, which cannot sign itself.

So with the scriptSig, anyone is free to add whatever new data they like to this section, which gets pushed onto the input stack. As long as you leave the stack the way you found it, you can insert arbitrary junk, and it will change the resulting TXID as seen in the blockchain without invalidating the transaction.
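To make that concrete, here is a toy sketch (not real Bitcoin code: a hypothetical two-field "transaction", with Rust's standard hasher standing in for double-SHA256) showing how a mutated scriptSig changes the txid while the signed portion stays identical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical two-field transaction: the signature covers only the
// outputs, but the txid is a hash over the whole serialization,
// scriptSig included.
#[derive(Hash)]
struct Tx {
    script_sig: Vec<u8>, // malleable: a sig cannot sign itself
    outputs: Vec<u8>,    // covered by the signature
}

// Stand-in for double-SHA256 over the full serialized transaction.
fn toy_txid(tx: &Tx) -> u64 {
    let mut h = DefaultHasher::new();
    tx.hash(&mut h);
    h.finish()
}

fn main() {
    let original = Tx { script_sig: vec![0x51], outputs: vec![0xaa, 0xbb] };
    // A relayer pads the scriptSig; the signature still validates
    // because it never covered the scriptSig in the first place.
    let mutated = Tx { script_sig: vec![0x00, 0x51], outputs: vec![0xaa, 0xbb] };

    assert_eq!(original.outputs, mutated.outputs);       // same spend
    assert_ne!(toy_txid(&original), toy_txid(&mutated)); // new txid
}
```

In real Bitcoin the mutation must leave the script's stack effect unchanged (e.g. a no-op push); the only point here is that the txid hash covers bytes the signature does not.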

This is a bad thing for "smart contracts" on Bitcoin, because many contracts depend on making chains of unconfirmed, future transactions based on hashing the entire transaction to compute its TXID (as clarkmoody suggests). An example is a cross-chain contract where you might want to send funds to an address partially shared between a stranger and yourself, and you need a way to set up a time-locked refund in case the protocol doesn't succeed (no longer necessary due to OP_CHECKLOCKTIMEVERIFY, but it's an example).

To do refunds this way, you would need to be able to sign chains of unconfirmed transactions without previous transaction IDs being changed by transaction malleability. Bitcoin does include a fix for this called "segregated witness", but the fix has been controversial. I don't keep up to date with the "scaling progress" now, but I doubt it has been merged yet.

clarkmoody 3 hours ago 1 reply      
> These txids are immaterial to how the Bitcoin blockchain works: their primary use is as a convenience for humans when referring to transactions.

This is incorrect. Each Bitcoin transaction input references a previous transaction output as the txid+output index. Transactions spending unconfirmed outputs are orphaned when the parent is malleated and confirmed.

Also, as a data hash with no checksum, txids are not convenient for humans at all.

> Transaction malleability is already more or less fixed in Bitcoin

A couple months ago, there was a significant malleability attack on the Testnet, in which nearly every transaction was malleated as it was included in a block.

f9beb4d9 2 hours ago 1 reply      
> However, OpenSSL did not do strict validation of the ASN.1 data by default

The more interesting problem was that this was non-deterministic: you could encode fields with 64-bit integers and they would bomb out on 32-bit systems. ASN.1 is also mind-bogglingly complex: you can encode, to arbitrary depths, completely nonsensical things like negative numbers, strings, and containers of multiple elements, and none of the implementations decode blocks the same way or adhere to the same limits.

dfox 3 hours ago 1 reply      
One thing that strikes me as weird is the reference to ASN.1. I always thought that Bitcoin only uses DER encoding for the signatures themselves (because that is what is usual for ECDSA, even though it is suboptimal for multiple reasons), and that the rest of the protocol, including the transaction format, is specified in terms of bytes and varints. Have I missed something?
The Company Isn't a Family signalvnoise.com
84 points by milesf  1 hour ago   28 comments top 7
alberth 56 minutes ago 8 replies      
Reed Hastings (Netflix founder) uses a different analogy.

He compares a company to a high-performing "sports team", the idea being that you're working towards a collective goal. You each have your own role, but if you're not performing, you get cut, just like any athlete would be.

The problem with the "family" analogy is that it implies "unconditional love" regardless of whether you're messing up, failing, or not performing.

grandalf 12 minutes ago 0 replies      
I look at it like this: a company is a group of predatory birds that all happen to be flying together in formation for a while.

Individuals temporarily have self-interest that aligns beautifully and lends itself to pattern flying. Individual incentives and goals will inevitably change over time.

Maybe one person will veer off in a different direction if the startup doesn't get A-list funding, maybe another will get bored after the prototype is finished, maybe another will fly off in chase of a relationship, etc.

Even large companies are like this. The CEO may be putting 150% into it for two years but will rationally give up and move on if things don't turn around. Someone may have tried hard to get hired right out of school, but after two years of experience fully plans on quitting to attend a PhD program.

I like to be very upfront with would-be hires about their goals. I don't expect someone to fly with the pack out of loyalty; there has to be something about their overall life plan that makes flying in this pattern a big win for some period of time.

I do want to talk about that life plan. How can time with my company help you get where you want to get and also help us get where we want to get? My goal of course is to make the experience of working there so incredible that you change your life plan to allow more time flying in our formation.

Some people don't have kids or spouses and would rather be the best indoor soccer player they can be or the best at spoken word night at the local coffee shop. When someone is at work I want them to be focused on goals that matter to them and that are meaningful to the company, but I also want them to switch gears and live their life whatever it is, since I know that they will do that anyway if they have any sort of independent, inner drive.

unsignedint 9 minutes ago 1 reply      
It's not that I personally have a problem with this notion itself, but when it is used as an excuse for problems, that's where I have issues.

e.g. "The company is going through great hardship, so we will have to mistreat you. You are part of our 'family', so you understand and will put up with that, right?"

I was in that situation many years ago and it was frustrating.

corybrown 1 hour ago 3 replies      
> Because by invoking the image of the family, the valor of doing whatever it takes naturally follows. You're not just working long nights or skipping vacation to further the bottom line, no, no, you're doing this for the family.

Same trick used by college fraternities to do all sorts of crap.

Johnie 29 minutes ago 0 replies      
This post is very simplistic. A company is defined by its culture. There are many different models for building the culture of a company.

Different cultures prioritize different things. For example, Netflix prioritizes the individual Star culture. Zappos prioritizes the commitment culture.

So whether a company is a family or a team or business transaction depends on the founder's imprint.

A good read is: https://cmr.berkeley.edu/documents/sample_articles/2002_44_3...

ShabbosGoy 22 minutes ago 2 replies      
I would argue that the first paragraph is a critique of capitalism itself.

The implicit assumption of capitalism is that you have a perpetual debt that cannot be repaid, much like the Christian "debt" that cannot be repaid to the Abrahamic God.

Capitalism is as much an economic system as it is a moralistic and religious system.

zucchini_head 25 minutes ago 0 replies      
About the article specifically, it's rather insubstantial and doesn't provide any more insight into company dynamics other than heartfelt paragraphs. Still a healthy reminder though.

But anyway, about the subject matter: from the several tech companies I've been at now, I can see that this kind of "exec brainwashing" does happen, and it always seems rather on-the-nose in its indifference and facelessness. Where I work currently we even have paragraphs like these on our toilet doors! The thing is, people work better when it's "for" something, something bigger than themselves, such as the "family" or "team" (it probably has something to do with our hunter-gatherer evolutionary genes). Execs, or rather HR and "Worker Performance Consultants", know this, and they (ab)use it to make workers produce more wealth for the company.

I think, however, that both companies and workers are to blame for past-shift hours, pressures to finish, and such, and both maybe partially for the same reason: a race to the bottom. In terms of companies, this can be, for example, when company X pushes its workers harder than company Y to undercut their prices. You definitely see this in things like ~[UK reference warning]~ Sports Direct International Ltd, which treats workers rather poorly [1][2][3] just to make their shoes a bit cheaper than the fancy hipster shop down the road, whose staff work only their shift hours. In terms of workers, you see the exact same thing, where worker X will over-propose on project A to undercut worker Y's realistically-proposed project A. So you see that if you don't work crazy hours to finish that big project, you can definitely bet on somebody else doing it.

In the end it's not always "the evil company and their grubby execs" who are asking too much of their workers to get that good quarter result, but workers themselves asking too much of themselves to get that freelance project, job, promotion, raise, good reference, or whatever, over their job-market or worker peers.

I think maybe the little post misses a few parts of the picture, but like I said, a good reminder of the topic.

[1] https://www.theguardian.com/commentisfree/2016/jun/08/inhuma...[2] https://www.theguardian.com/business/2015/dec/09/how-sports-...[3] http://www.bbc.co.uk/news/uk-england-derbyshire-36855374

Announcing Rust 1.19 rust-lang.org
132 points by steveklabnik  3 hours ago   21 comments top 4
jmull 1 hour ago 5 replies      
Hm. I'm not sure about the addition of unions. Why add something that is unsafe to read or write? You need an additional mechanism to let you know which type is OK to access.

They mention the case where the type can be distinguished by the least significant bit, but wouldn't it be better to handle that case as an enum? That is, the least significant bits define the enum tag, while the remaining bits define the associated value.

(By the way, I really mean this as a straight question, not a criticism in the form of a rhetorical question. I really don't know enough about it to be criticizing it.)
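For reference, a minimal sketch of the feature being discussed (names are my own, not from the release notes): a union carries no tag, so the compiler cannot check which field currently holds valid data, which is exactly why every read is `unsafe`.

```rust
// Rust 1.19 `union`: the same storage reinterpreted as either field.
union IntOrFloat {
    i: u32,
    f: f32,
}

fn main() {
    // 0x3f80_0000 is the IEEE-754 bit pattern of 1.0f32.
    let u = IntOrFloat { i: 0x3f80_0000 };
    // Reads are `unsafe`: nothing tracks which field was last written.
    let as_int = unsafe { u.i };
    let as_float = unsafe { u.f };
    assert_eq!(as_int, 0x3f80_0000);
    assert_eq!(as_float, 1.0);
    println!("{:#x} reinterpreted as f32 is {}", as_int, as_float);
}
```

An enum, by contrast, stores a tag alongside the data, which is what makes matching on it safe; unions exist for FFI and for cases (like the least-significant-bit trick mentioned above) where you manage the tag yourself.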

Cieplak 1 hour ago 1 reply      
Great running into you at Shizen @steveklabnik!

Just started using Rust in a serious capacity this month to secure some C++ functions that are called by our Erlang apps, with great assistance from Rustler [1]. Several people have complained to me about the decision to remove async IO from Rust, but I'm really grateful that it happened, because it lets Rust focus on being the best at what it is. Erlang's concurrency primitives and Rust's performance & security are a match made in heaven.

[1] https://github.com/hansihe/rustler

JoshTriplett 2 hours ago 1 reply      
Incredibly excited to see unions available in stable Rust now!

The release notes mention my RFC, but a huge thanks also to Vadim Petrochenkov for the implementation, and all the myriad RFC contributors.

ofek 51 minutes ago 2 replies      
Wow, a break yielding a value from within a loop is awesome! Do any other langs have that?
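For anyone who hasn't seen the syntax yet, a minimal sketch (my own example, not taken from the release notes) of `break` carrying a value out of a `loop`:

```rust
// Rust 1.19: `break` can carry a value, making `loop` an expression.
fn first_power_of_two_above(limit: u32) -> u32 {
    let mut x = 1;
    loop {
        if x > limit {
            break x; // the whole `loop` evaluates to `x`
        }
        x *= 2;
    }
}

fn main() {
    let p = first_power_of_two_above(100);
    assert_eq!(p, 128);
    println!("{}", p);
}
```

Before this, you would have declared a mutable variable outside the loop and assigned it before breaking; now the loop itself is the expression.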
Hidden dungeons of the London Underground michalpaszkiewicz.co.uk
115 points by superqwert  6 hours ago   21 comments top 5
Peroni 5 hours ago 2 replies      
The BBC (unsurprisingly) have a fascinating documentary that goes behind the scenes during the construction of a new tube line - http://www.bbc.co.uk/programmes/b04b7h1w

It shows the unbelievable precision required to build a new tunnel given the amount of existing tube lines and other, often unexpected, underground structures. Well worth watching.

arethuza 5 hours ago 1 reply      
If you like this article you might like Subterranea Britannica:


zegl 6 hours ago 1 reply      
What is up with the blur effect when loading this site? It makes me dizzy.
barnaclejive 4 hours ago 1 reply      
Why are the photos so dark, small, and black and white?
dsfyu404ed 1 hour ago 0 replies      
>A number of the rooms included some fans - these were monstrously huge metal tubes that hold fans inside that can be more powerful than airline jet engines

Are we talking CFM or specific impulse?

AlphaBay, the Largest Online 'Dark Market,' Shut Down justice.gov
5 points by pero  33 minutes ago   1 comment top
rmwaite 4 minutes ago 0 replies      
Don't these people know that they will never kill this? Just like Napster's death, and OiNK, and What.CD, and so on. Every time they shut one of these down, the next one will be harder to shut down.
ArangoDB 3.2 GA RocksDB, Pregel, Fault-Tolerant Foxx and Satellite Collections arangodb.com
108 points by bjerun  6 hours ago   27 comments top 10
DeShadow 3 hours ago 0 replies      
Very good news! :) The RocksDB engine is a very big step in ArangoDB's growth! I use ArangoDB in some projects and it's absolutely fantastic: very powerful AQL queries, very, very fast, good optimizations.

I checked alpha & beta versions of 3.2 and the improvements are really amazing :)

lmeyerov 3 hours ago 1 reply      
The graph layers look wonderful, I'm happy to see they have progressed so far!

The combination of Arango's new capabilities with Graphistry should be interesting. So, if Arango graph users are interested in trying GPU visual graph analytics for looking at more data at a time, our team is happy to share access to http://github.com/graphistry/pygraphistry . Likewise, for investigation teams digging daily into event & entity data (security, fraud, patient journey, ...): we are piloting our visual playbook automation & interactive investigation layer with medium and large enterprises, and we'd love to chat about that as well. Either way, ping info@graphistry; this sounds like a great match.

solisoft 5 hours ago 2 replies      
Congratulations to the ArangoDB team! With ArangoDB you can create powerful applications using a multi-model database and an API builder called Foxx. I'm building all my apps within ArangoDB/Foxx now; no more need for separate server apps.
jacobferrero 52 minutes ago 0 replies      
I use ArangoDB for all my projects, big, medium, and small. Happy to see great new stuff.
calrain 5 hours ago 1 reply      
Great news. I've been using ArangoDB for over a year, and this upgrade is great; ArangoDB is shaping up to be a 'Giant Killer'.

The new automated deployment options for Foxx as well as the better memory management with RocksDB are highlights for me.

allandubey 5 hours ago 0 replies      
Awesome news! Go Arango team
steiner_j 5 hours ago 1 reply      
Congrats on shipping! I wonder if RocksDB could be the right choice for small to mid-scale IoT scenarios? Does anyone have an opinion on that?
maxpert 4 hours ago 1 reply      
I'd like to hear from anyone who has used it in production, and what scale the DB is handling.
princetman 4 hours ago 3 replies      
I wonder if they improved cluster setup with this release. If you tried it without DC/OS, it was a painful experience. ArangoDB Starter was definitely a step in the right direction.
MauroJunior 3 hours ago 0 replies      
Amazing, this release has many features that I was waiting for. I already switched my project from MongoDB to ArangoDB, but I still miss realtime features, or at least a tailable cursor like in MongoDB... But even so, it's worth it!
How I Tricked Symantec with a Fake Private Key hboeck.de
39 points by hannob  3 hours ago   3 comments top
watbe 42 minutes ago 2 replies      
Symantec is surely testing the patience of Google/Mozilla now. Illegitimate revocation seems almost on the same level as illegitimate issuance of certificates. Imagine the impact on an HPKP site.
Show HN: Elixir Tab Little bit of Elixir in every new tab github.com
22 points by efexen  3 hours ago   5 comments top 3
swsieber 59 minutes ago 0 replies      
Does anybody know if there's a generalized version of this? I'd love some way to define custom things to display on new pages. If not, I think I'll have to go fork this and implement it :)

Edit: Oh, you could integrate this with Anki to good effect I would imagine. Hmm....

krat0sprakhar 58 minutes ago 1 reply      
Sounds like an interesting idea, but I think both the GitHub page and the Chrome extension page could benefit from more examples of what the extension shows.
xutopia 1 hour ago 0 replies      
I wish there were a way to propose real examples and vote up the ones you prefer. Some of those are really hard to understand in context.

Great idea on the extension though!

A coherent story of Stonehenge may be beginning to emerge bbc.com
23 points by ohjeez  1 hour ago   8 comments top 5
komali2 40 minutes ago 0 replies      
Well, that's a clickbait title if I ever saw one, but it is a good article. It's basically a summary of everything we know up to now about Stonehenge. If you haven't read about Stonehenge in a couple years, there's some good recent work done that's worth catching up on.

From the article itself:

> But a coherent story may be beginning to emerge. That has been particularly true over the last decade.

Basically "here's the new stuff we've got on Stonehenge."

JoshMnem 12 minutes ago 0 replies      
Anyone interested in Stonehenge should also check out this book: http://www.lynnekelly.com.au/the-memory-code/

It goes into some new ideas about what it was used for.

fluxby 41 minutes ago 1 reply      
Nothing was really cracked. There is still no explanation of who built it or how it was built.

The fact that it was a religious area for many centuries has been known for a long time.

Same can be said about pyramids in Mexico and great pyramid in Egypt, as well as many other sites like Pumapunku, etc.

Ancient mega structures are one of the biggest mysteries of our world.

ajarmst 9 minutes ago 0 replies      
"We found that there were some other structures near here and it's a bit older than we originally thought. Also, some of the rock came from quite far away. We still have a lot of questions." isn't really what comes to mind when the title claims that a mystery has been cracked.
apokryptein 1 hour ago 0 replies      
Interesting read -- thanks.
Underground Hansa Market taken over and shut down politie.nl
155 points by ukkie  3 hours ago   165 comments top 13
loeg 3 hours ago 1 reply      
I like how they try and color the site by associating it with sites that sell weapons and child porn.

> It is Hansa Market, currently the most popular dark market in the anonymous part of the internet, the so-called darknet.

> ...

> The darknet markets enable large-scale trading in chiefly illegal goods, such as drugs, weapons, child pornography, and ransom software. ... No weapons or child pornography were sold on Hansa Market.

captainmuon 2 hours ago 2 replies      
I would love to know how they got caught. There is probably something to learn about opsec from this.

The paranoid side of me says Tor is unsafe (whatever the reason: the authorities having backdoors on most hosting services or on your PC, the encryption being cracked by some unknown breakthrough, or 90% of entry nodes being controlled by the NSA). And the NSA, FBI, and co. just have hundreds of people working on "parallel construction" of evidence.

Of course, a blunder by the operators is much more likely...

5_minutes 3 hours ago 6 replies      
The clever thing is that they took it over and for one month monitored it without the users knowing.

Everyone obfuscating their personas with Tor and bitcoins still had to enter their postal addresses on the website to receive the goodies.

That database must be a wet dream for law enforcement.

delegate 3 hours ago 3 replies      
> Some 10,000 foreign addresses of Hansa Market buyers were passed on to Europol.

That's really bad, because not all countries are Holland, which is famous for its 'relaxed' attitude towards drugs in general, so some users might have their lives turned upside down by this: followed, arrested, jailed, extorted, etc.

jorrizza 3 hours ago 2 replies      
Services like these give Tor and related projects a bad name. Taking down illegal marketplaces is part of the police's job, and this is definitely a success story. The article is a bit lacking in details, but it seems they took the site down using regular old police work, as they mention an "undercover operation". This proves once more that weakening security for everybody is not needed to catch criminals.
abrkn 2 hours ago 1 reply      
It's just a matter of time until the storefronts are developed open source on GitHub and hosted decentralized on a network like MaidSafe. People love to get high and are willing to pay; innovation will follow to meet demand.
bahjoite 3 hours ago 2 replies      
I visited Hansa a few days ago, and the first thing I noticed was a banner that read something like:

"New registrations are disabled because of high demand caused by the exodus from AlphaBay"

peterwwillis 1 hour ago 0 replies      
The darknet market they took down is approximately 2,850 times smaller than Europe's cocaine market, or 12,000 times smaller than the general drug market. The Netherlands is also the main drug production and trafficking route into Europe.

So, basically, they took down a minor competitor to the bigger drug trafficking businesses.

DiscoKing 1 hour ago 1 reply      
How many lessons do people need before they learn? Prohibition doesn't work. It creates more violence than it solves.
tomjen3 3 hours ago 2 replies      
> This was made possible by the arrest of the two administrators of Hansa Market in Germany, aged 30 and 31. Since their arrest, the two men, from Siegen, North Rhine-Westphalia, have been kept in pre-trial detention, and are only allowed to have contact with their lawyers.

So, admins of the other markets: always have a dead man's switch.

ryanlol 2 hours ago 2 replies      
Dream Market has been up since Nov 2013 (a year older than even AlphaBay). Perhaps they're the ones that are actually based in Russia ;P

You seriously have to be a moron to host your DNM in Canada or NL though, NL LE especially has many years of experience and millions worth of equipment for these investigations. Pick a place where they're less likely to have advanced DMA equipment handy! Even better though, is to choose a place where the FBI won't be able to fly a team with advanced DMA equipment.

In fact, I wouldn't be too shocked if the rather sophisticated wiretap gear the Dutch police have at AMS-IX was capable of identifying hidden sites with timing attacks.

Supposedly this was posted on Hansa forums by the staff in the middle of the takedown http://i.imgur.com/yowD1Vr.png

TausAmmer 2 hours ago 0 replies      
Thugs will be thugs. This will only drive innovation.
RutZap 2 hours ago 3 replies      
I think this is brilliant. A very efficient way of policing and removing a lot of drugs from the market without spending a lot of public money and wasting time on the streets.

Also the darknet is great as it reduces the violence associated with drug crime, by taking the drugs off the streets and into the legitimate courier business. You have to love technology sometimes.

How the Web Became Unreadable wired.com
77 points by rbanffy  2 hours ago   46 comments top 17
geebee 2 hours ago 3 replies      
Do you think this may be related to the rise of javascript and heavy front end, client-side frameworks?

To be clear, this is something I'm mulling over, not a hard conclusion, but here it goes...

Creating a simple, server rendered, text based web site may be exactly what the users need, but from a developer's point of view, it can also be a career risk. A rapidly increasing percentage of development jobs out there now require experience with a front-end framework. As a result, developers need this experience, so they start adding front end frameworks in order to 1) gain that experience, and 2) document that experience in a real world project. Remember, a project isn't just a project, it's an audition for your next project. This is how tech recruiting works[1]. So developers may be adding new bells and whistles that are not only unnecessary[2], but actively harmful to their current project, largely because they are necessary to their resume, and it's actively harmful not to have them there.

In short, one of the reasons the web is becoming unreadable is that web developers can't create the resume they'll need for their next gig by using the best tech for their current one.

[1] I'm getting long in the tooth enough to have seen this a few times. Companies required "EJB", it wasn't enough to have experience with Tomcat and standard Java. Later, Spring, Struts, Hibernate, iBatis were required. These days, it's ember, react, and other rapidly evolving JS frameworks.

[2] I want to be clear that I think many of these frameworks are actually pretty excellent, for projects that need them. The problem is, if your current project doesn't, you still have a strong incentive to bring them in.

hacker_9 2 hours ago 3 replies      
Can't tell if this piece is genuine or not: ads taking up most of the space on the screen to the left and right; ads every 5 paragraphs; a video ad; constant moving of the article text as ads load... ads popping in and out as I scroll... at least it's a great showcase of an unreadable article.
rkangel 2 hours ago 3 replies      
I have a theory as to one of the leading causes of this blight: screens on MacBooks show text with more contrast than the screens on most devices. Other devices vary, obviously, but Apple tends to be quite far at one end of the spectrum (to my eye).

Designers obviously love their Macs, and while using them they produce a design that is at the borderline of what's OK on their screen and then is unusable on lots of others.

I have a big, bright, high contrast monitor and good eyesight (in theory the best case), but it's connected to a Windows PC and various websites render as hard to read. At least I'm savvy and so can use StyleBot to fix the worst offenders!

toddmorey 2 hours ago 1 reply      
Inside scoop from a creative director: designers aren't toning down contrast to reduce eyestrain or to avoid black. It's an attempt (really a hack) to simplify a busy page/UI. They are telling you there's too much content on the screen at one time.

If you aren't empowered to trim the content, you can at least tone the type down in contrast & size to make it appear to be less. Every time I see subtle grey text on a marketing page, I can hear the designer thinking, "no one is really going to read this anyway."

Just like in speaking, my advice is to make your words bold but few.

brynedwards 44 minutes ago 0 replies      
For those complaining about the appearance of the article: it looks like the original link [1] now redirects to the wired.com version but used to redirect to medium.com [2], at least that's what happens when I enter it into archive.org. Also, previous discussion [3].

1. https://backchannel.com/how-the-web-became-unreadable-a781dd...

2. https://web.archive.org/web/20161019173808/https://backchann...

3. https://news.ycombinator.com/item?id=12743628

iokevins 2 hours ago 1 reply      

"My plea to designers and software engineers: Ignore the fads and go back to the typographic principles of print: keep your type black, and vary weight and font instead of grayness. You'll be making things better for people who read on smaller, dimmer screens, even if their eyes aren't aging like mine. It may not be trendy, but it's time to consider who is being left out by the web's aesthetic."

Dirlewanger 2 hours ago 0 replies      
Unless they practice what they preach, anything from Wired (or any modern bloated "news" site for that matter) whining about this can be discarded. I don't even know why they publish this hypocritical crap.
kbuchanan 2 hours ago 1 reply      
The first thing that appeared upon clicking the link, for me, was a full-screen overlay ad asking me to subscribe to Backchannel. Unreadable indeed.
Nomentatus 2 hours ago 1 reply      
"changed its text from legible to illegible. Text that was once crisp and dark was suddenly lightened to a pallid gray." - and Wired published it in... grey!
randcraw 1 hour ago 0 replies      
Brightness contrast is important, but so is color contrast. As readers of early WiReD magazine (ca. 1998) well know, reading orange text on a yellow background is just as painful as light gray on medium gray.

The same goes for font size. Due to the range of device sizes and display dot densities, small fonts can easily become unreadable. Mobile versions of web sites help only so much. I find at least half of Apple's system menus to be unreadably small on my iMac 5K monitor and my MacBook Retina. This illegibility needs to be user-fixable in a systematic way, and ASAP. Just telling the user, "Use an Accessibility service", is NOT the right response.

xigency 32 minutes ago 0 replies      
Given the comments, it seems like most readers don't see the entire article as a single vertical column of characters.
aphextron 1 hour ago 0 replies      
The irony of this article is incredible
n0us 2 hours ago 1 reply      
> Text that was once crisp and dark was suddenly lightened to a pallid gray.

Then the author puts quoted highlights in a pallid gray through the article. I guess this could be intentionally ironic but I sorta doubt it.

creeble 1 hour ago 0 replies      
Wired's print magazine is easily the least-readable mag on the planet.

Among other poor style choices, they abandoned text borders some years back - text blocks sit within tiny fractions of a character width of high-contrast (often just black) page features.

I guess it's what they call "edgy", literally. So hard to read!

But I may be the last remaining subscriber anyway.

jaclaz 1 hour ago 0 replies      
Just in case:


Maybe a bit "extreme", an "artistic site" has all the rights in the world (+1) to be "artistic", but maybe it has been a tad bit overdone.

bogomipz 2 hours ago 0 replies      
The article has nearly half the screen real estate taken up by a social media "Share" bar, a "Most Popular" stories bar, and a top nav bar. Jammed between these is the actual content.

The web has become unreadable indeed, Wired; I'm not sure if contrast ratios and typography are the first things that come to most people's minds, however.

The Rise of Python for Embedded Systems zerynth.com
102 points by lfcerf  7 hours ago   71 comments top 14
nikofeyn 4 hours ago 8 replies      

i just can't understand why someone would want python for embedded applications. i primarily use labview for systems development, and there is not a single thing that would be gained by moving to python while many things would be lost. and no one can convince me that using python over labview would generate more robust code. i have used python before, and it was slower to develop in and more error prone than labview for system development.

personally, i want to see more lisps/schemes and ML-based languages on embedded systems. i know there are some, but they aren't quite there. that's the direction i think we should move. python isn't the best at any one thing other than having lots of libraries. there are far more productive dynamic languages (lisps/schemes) and far better languages for developing safe code (ML-based). i am currently learning idris, which seems like it could be amazing for embedded code development. having provable state machines would be a huge plus.

turbinerneiter 1 hour ago 1 reply      
I'll just drop this here: [Evaluation of MicroPython as Application Layer Programming Language on CubeSats](http://ieeexplore.ieee.org/document/7948548/)

I think Python has _everything_ a language needs to be the future of Embedded Systems - except a compiler.

You start writing your program using MicroPython as the interpreter. You add type hints because it's the right thing to do anyway. Once your prototype is done, you compile to get the speed (cutting execution time, increasing sleep time, increasing battery life).
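To make that workflow concrete, here's the kind of type-hinted function the commenter has in mind (a plain-Python sketch; `average` is an invented example, and the AOT compiler that would exploit the hints is, as the comment says, hypothetical):

```python
def average(samples: list) -> float:
    # Under an interpreter like MicroPython the annotations cost nothing
    # at runtime; a hypothetical AOT compiler could use them to pick
    # static integer/float representations instead of boxed objects.
    total = 0
    for s in samples:
        total += s
    return total / len(samples)

print(average([10, 20, 30]))  # 20.0
```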

My background is Aerospace Engineering and maybe that's why I can't understand why nobody does a real, AOT, Python compiler. There is Nuitka, but it's a transpiler that extensively uses libpython and isn't fit for microcontrollers (still an awesome project, though). Cython also uses libpython and a different type-annotation system.

chipsandkip 2 hours ago 3 replies      
There's nothing wrong with using Python on an embedded platform in certain circumstances, but this article is a poorly written advertisement.

> "Expert and skilled C programmers could justify this stating that C generates a faster, more compact, and more reliable code BUT if you replace C with assembly in that statement, youd have exactly what an extinct generation of programmers said about 20 years ago!"

Yes, the differences between C and Assembly are directly comparable to the differences between Python and C.

> "Theres an enormous crowd of professionals skilled in using Python potentially able to develop the software for the next big thing in IoT and to ship new amazing embedded applications in a short time."

Python doesn't even appear on the IEEE rankings shown at the start of the article. It's amazing how this "enormous crowd" can stay so silent.

Are there any examples of products with an MCU running Python being manufactured at scale? Even if Python provides benefits for rapid prototyping, I'm really struggling to think of a business case where you wouldn't replace it with C or C++ before going into production.

ashwin67 4 hours ago 2 replies      
Each has its own place. I write code in all languages including Python and C. Yet, I have to write code in assembly even today when the particular problem statement requires its usage. Such generalization can only be expected from a marketing campaign that this post really is.
kronos29296 5 hours ago 2 replies      
Couldn't something like MicroPython and Cython replace Zerynth? Reads like the marketing post that it is. Starts off with interesting stuff and ends with promo. It also looks very proprietary to me. Just add more FOSS and I will definitely be interested.
infocollector 5 hours ago 3 replies      
I am using MicroPython on ESP8266 right now. What are the major differences between Zerynth and MicroPython, and why do they exist?
syntaxing 4 hours ago 6 replies      
Is there a benefit to using Python rather than C/C++ for embedded systems in terms of performance? Does Python work well for lower-level applications (in terms of hardware access)?
afeezaziz 5 hours ago 3 replies      
I am trying to read more but the website went down. My question: I have been dabbling with mbed for quite some time and I would love to use Python for uC work. The main problems (at least the perception) are that you cannot do low-power programming, and that Python will not work for real-time functions. Are these issues addressed by Zerynth?
th0ma5 4 hours ago 0 replies      
What about memory fragmentation? I have had a lot of fun with MicroPython on an ESP8266, and, while I was thinking about my problem all wrong, I did bump up against reboots due to exhaustion of the address space (I think)...
rb808 3 hours ago 2 replies      
Do CS grads even get taught C any more? I get the feeling that the newest generation is all Java/Python/JS.

Given that cheap embedded systems are so powerful now it makes sense.

linopolus 3 hours ago 0 replies      
So the developers of Zerynth say Zerynth is great. Nice try... I say: wrong tool for the job. Use C.
bobjordan 4 hours ago 0 replies      
pjmlp 5 hours ago 1 reply      
There are even companies writing drivers in Python.


Search for "via a driver written in Python!".

NIST Random Beacon nist.gov
108 points by relyio  12 hours ago   50 comments top 16
URSpider94 8 hours ago 5 replies      
There are two clear uses for this, and one anti-use.

First, in scientific coding, one of the big challenges is in reproducing the results of a program that uses random numbers. A classic solution is to use a deterministic pseudo-random number generator that can be seeded, such that if it's seeded with the same number on two different runs, it will always generate the same output. This could be a great replacement for that, since you could write a rand() routine that accepts a start point in the chain and traverses forward to output random values on demand.
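A rough sketch of that first use in Python (the beacon value below is a made-up placeholder, not a real NIST record, and `seeded_rng` is an invented helper name):

```python
import random

# Hypothetical beacon output as a hex string; in practice you would fetch
# the published value for a chosen timestamp and store it with your results.
beacon_hex = "a1b2c3d4" * 16

def seeded_rng(beacon_value: str) -> random.Random:
    # Seed a deterministic PRNG from the published beacon value, so any
    # run that starts from the same beacon record reproduces the same
    # stream of "random" numbers on demand.
    return random.Random(int(beacon_value, 16))

r1 = seeded_rng(beacon_hex)
r2 = seeded_rng(beacon_hex)
assert [r1.random() for _ in range(5)] == [r2.random() for _ in range(5)]
```

Anyone who knows which beacon record you used can replay the exact sequence, which is the point for reproducibility (and exactly why it is an anti-use for cryptography).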

Second, you could use this as a source of future randomness -- for example, I will award you $x if the next eight bits out of the random generator represent 0-127, and will award the $x to me if they represent 128 or greater. We can both check the value, and we don't have to trust each other.
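A minimal sketch of that wager in Python (assuming the beacon output arrives as a hex string; the helper name is invented):

```python
def settle_bet(beacon_hex: str) -> str:
    # Interpret the first 8 bits of the beacon output as an integer 0-255.
    first_byte = int(beacon_hex[:2], 16)
    # 0-127 -> one party wins, 128-255 -> the other. Both sides can check
    # the same published value independently; neither has to trust the other.
    return "you" if first_byte <= 127 else "me"

print(settle_bet("7f" + "00" * 63))  # 0x7f = 127 -> "you"
print(settle_bet("80" + "00" * 63))  # 0x80 = 128 -> "me"
```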

The caveat with the last example is that we both have to trust that NIST has not been compromised ...

The anti-use would be in any sort of cryptographic implementation, since any "entropy" you'd be gaining by using this data as a source of randomness is completely counteracted by the fact that the source is known. Randomness becomes deterministic once the source of the randomness is disclosed and broadcast ...

phkahler 4 hours ago 0 replies      
A clock for transactions. Since this produces verifiable data every minute, you can use that data to sign any transaction and be able to verify that the transaction did not occur prior to the publication. However you cannot verify that the signed data was not modified or even created after the publication of the data. Any thoughts on how this concept can be useful then?
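A hedged sketch of that lower-bound idea in Python (plain SHA-256 rather than a real signature scheme, and the function name is invented):

```python
import hashlib

def timestamped_digest(transaction: bytes, beacon_hex: str) -> str:
    # Mixing the published beacon value into the digest proves the digest
    # could not have been computed before that beacon output existed.
    # As the comment notes, this gives only a lower bound on the time:
    # it says nothing about how much later the data was created or altered.
    return hashlib.sha256(transaction + bytes.fromhex(beacon_hex)).hexdigest()

d = timestamped_digest(b"transfer 10 units", "ab" * 32)
print(d)
```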
kanzure 3 hours ago 2 replies      

but people will do it anyway, the level of insanity out there is really high https://www.reddit.com/r/btc/comments/68pusp/gavin_andresen_... or even https://arstechnica.com/security/2015/05/crypto-flaws-in-blo...

also why would it be a good idea to use a centralized source of "entropy"....? why is NIST involved at all?

azinman2 11 hours ago 5 replies      

Can this at least be used to seed a CSPRNG at boot / device install, ideally with a mix of other entropy available? Is the problem that they're shared for everyone at some given time?

0xabe 2 hours ago 0 replies      
Compare to the Dice-O-Matic [0] just recently featured here.

[0] https://news.ycombinator.com/item?id=14806986

masto 7 hours ago 0 replies      
My hero, John Walker, built this thing in 1996: https://www.fourmilab.ch/hotbits/
friendzis 4 hours ago 1 reply      
If I read the announcement correctly, the hash is computed on the last value and is distributed separately. Either I miss something or they can change ANY value in the chain and the last hash would still check out.

It is only viable to use this while constantly querying /last. If a network/power/etc. outage lasts more than 60 secs, chains should be considered separate.

Or maybe I'm missing something obvious
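For what it's worth, here's a toy Python model of a back-linked chain (a simplified record format, not NIST's actual one) showing why the protection depends on observers recording values as they're published, as the parent suggests:

```python
import hashlib

def record_hash(value: str, prev_hash: str) -> str:
    # Each record's hash covers its own value plus the hash of the
    # record before it, so records are linked back-to-front.
    return hashlib.sha256((prev_hash + value).encode()).hexdigest()

def verify_chain(records: list) -> bool:
    # records: list of (value, prev_hash) pairs in publication order.
    # If you kept your own copy as values appeared, altering any earlier
    # value breaks every later link; if you only ever see the latest
    # record, you have nothing to compare it against.
    for i in range(1, len(records)):
        expected = record_hash(records[i - 1][0], records[i - 1][1])
        if records[i][1] != expected:
            return False
    return True
```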

abrookewood 11 hours ago 2 replies      
How susceptible would this be to government influence?

NIST have a long history of positive work in the security field, so they have that in their favour. They also publish the "hash of the previous value to chain the sequence of values together", so presumably someone could record all of the values and test them for randomness ... but still, if this became the default source for random number generators in operating systems everywhere, it would present a very attractive target.

johnhenry 4 hours ago 0 replies      
bstamour 4 hours ago 0 replies      
For those interested, here's a Haskell interface to the Beacon I put together some time ago [1].

[1] https://github.com/bstamour/haskell-nist-beacon

zeroflow 9 hours ago 1 reply      
What would be the typical use case for this beacon?

The first use case I could think of would be a warning canary that signs the data of the beacon, akin to the plot in movies where someone holds up today's newspaper.

The 2nd would be, as said on the page, a kind of "nothing up my sleeve" random number generator

wyldfire 6 hours ago 1 reply      
Gambling seems like an interesting use case. Having a public seed and a public algorithm would mean that results should be verifiable but not predictable.

I suppose it would be interesting to consider how latency of this public resource represents a risk and how it could be mitigated.

tsujamin 9 hours ago 0 replies      
I remember using ANU's one as the random source for a first year CS assignment once haha


i_have_to_speak 8 hours ago 0 replies      
Hmm, what if a government agency with a warrant / a hacker can force a specific consumer of this service to be fed known "random" values?
rini17 10 hours ago 2 replies      
/dev/random seeding on bootup from NIST will be implemented in systemd in 3..2..1..
TheDreamBotcher 8 hours ago 0 replies      
I think NIST implemented the random beacon when they announced the WTC towers report?


Introducing Bluetooth Mesh Networking bluetooth.com
94 points by tdrnd  12 hours ago   42 comments top 10
crispyambulance 3 hours ago 0 replies      
I don't know if bluetooth mesh will have the same problem, but zigbee as implemented by Samsung Smartthings leaves a lot to be desired.

For one thing, there is no obvious way to inspect how "the mesh" is configuring itself. I can't observe whether or not repeaters are working as intended nor anything about the quality of the signal strength. The only way to make changes to what gets connected to what is by turning everything off and then turning it back on in a controlled sequence.

I hope that Bluetooth mesh vendors will provide tools for troubleshooting and actually inspecting the network. Networks can be squirrely things; I don't like taking a shot in the dark with this stuff.

LyalinDotCom 5 hours ago 2 replies      
I just want my wireless BT headphones to work right across multiple devices, can they fix that too?
joelthelion 7 hours ago 4 replies      
How about introducing reliable bluetooth instead?
tenryuu 5 hours ago 0 replies      
With mesh network being shipped, I hope there is now time to develop an improvement to how bluetooth can be implemented into applications to allow better UX in handling connections/handshakes. It's fine when you only use one device, but when you start using two, man it's not user friendly
reaperducer 3 hours ago 3 replies      
Why "introducing"? There are already products on the market that have this. Most of the lightbulbs in my house operate on a Bluetooth mesh network. It's slower than the wifi-connected bulbs, but the range is much greater since each bulb only needs to be able to see the next-nearest bulb. Otherwise I wouldn't be able to reach the lights out on the driveway.
joombaga 6 hours ago 2 replies      
This is a standard press release. Can someone tell me what this technology actually is/does? Is it a bluetooth profile? A new protocol? At what layer? Is it a new spec/new optional/addition to the spec? Does it require new hardware?
guidefreitas 6 hours ago 4 replies      
And I still can't use my Bluetooth mouse and headset at the same time reliably on the MacBook Pro. How can I trust that?
iampims 2 hours ago 1 reply      
I found the FAQ to be a good introduction document to the new mesh networking feature.


VikingCoder 15 minutes ago 0 replies      
So how will my friends and I take advantage of this?

Will we have to wait for new hardware that supports it?

Or will we be able to update drivers on our Android phones, and then it'll just work?

donpdonp 2 hours ago 2 replies      
It'd be interesting to see how this compares to scatternet that has been part of bluetooth since 1.0.
       cached 20 July 2017 19:02:02 GMT