hacker news with inline top comments    .. more ..    27 Apr 2017 News
home   ask   best   1h 53m ago   
GitPitch Markdown Presentations for Devs on GitHub and GitLab github.com
103 points by gitpitch  2 hours ago   38 comments top 20
gitpitch 2 hours ago 3 replies      
Hi @mankash666, thanks for your feedback.

GitPitch was indeed launched with developers in mind. Devs often need to present and promote their work. Having worked as a software consultant for over 20 years I can attest to this. And with the rising popularity of tech meetups and conferences now more than ever making it easy to clearly present concepts, designs, etc. right alongside the actual code in your repo is a big win.

GitPitch is also seeing wide adoption across academia, particularly as a tool for delivering course materials, again right alongside the code.

As a final note, you mentioned a perceived drawback. GitPitch presentations are indeed automatically made available online just as soon as you git-commit and push to GitHub, GitLab or Bitbucket. But if you really want to host the presentation on your own domain or under your GitHub pages a fully self-contained bundle for your presentation is available for download with one click. You can then take the contents of that bundle (HTML/CSS/JS) and deploy it on your own Web server. No problem.

xkr 2 hours ago 2 replies      
Second slide of their hello-world example:

  <!-- .slide: data-autoslide="2000" -->
  ### No more <span style="color: #666666">Keynote.</span>
  ### <span class="fragment" data-fragment-index="1" data-autoslide="2000">No more <span style="color: #666666">Powerpoint.</span> <br>
  ### <span class="fragment" data-fragment-index="2" data-autoslide="3500">Just <span style="color: #e49436">Markdown</span>. Then <span style="color: #e49436">Git-Commit</span>.
Doesn't look like simple markdown to me.

mankash666 2 hours ago 2 replies      
Reveal-Md is similar: https://github.com/webpro/reveal-md

The pitch for git pitch itself appears directed at developers, who apparently have the time and patience to write markdown code for slideshows, but don't have the patience to output html/js via something like reveal-md. The drawback is the inherent advertising of gitpitch. With reveal-md your slideshow can be uploaded as a static page(s) to your own domain, or even directly to GitHub pages.

didibus 1 hour ago 0 replies      
I highly recommend pandoc for this use case. Pandoc is amazing. I recommend it for all your markup: tech docs, books, presentations, blog posts, etc.

It lets you use Markdown to generate HTML, DOCX, PDF, every other markup, and of course presentations, just like GitPitch.


It can also easily be extended to support more features.

izacus 10 minutes ago 0 replies      
Since conferences usually have flaky and broken wifis, does it work well offline?
sidcool 5 minutes ago 0 replies      
Very useful. Hope it is adopted as well as README.md
shurcooL 1 hour ago 0 replies      
This reminds me of the present tool [1] that the Go project created, which I like and use for my slides these days. Main reason, which it shares with GitPitch, is the simplicity. It works for my basic needs, and I don't want anything more heavyweight.

For an example, see .slide source [2], and the presentation result [3].

[1] https://godoc.org/golang.org/x/tools/cmd/present

[2] https://github.com/golang/talks/blob/master/2012/chat.slide

[3] https://talks.godoc.org/github.com/golang/talks/2012/chat.sl...

andreineculau 1 hour ago 0 replies      
I have been doing this for years with remark. Works on any git service, because it only needs a url to a markdown file.

See http://andreineculau.github.io/go-remark/?//andreineculau.gi...

Source: https://github.com/andreineculau/go-remark

rorosaurus 2 hours ago 2 replies      
I would love to see a version of this where you can deploy your slideshow to your own Github Pages, with no external dependency on GitPitch.com.
stephenr 1 hour ago 0 replies      
Maybe it's the mercurial-using cynic in me, but this seems only barely related to Git, and much more related to Markdown.
autarch 1 hour ago 1 reply      
This is very cool. One thing that I wish the docs made clearer was that this is a wrapper around `reveal.js`, so if you're familiar with reveal, then learning this tool is just a little more effort.
alihcevik 46 minutes ago 1 reply      
Awesome! But I can't use it if I need to make my slides public. Do you support private repositories?
andrewguenther 1 hour ago 1 reply      
Please, please, don't mess with my page history. When I hit back in my browser, I don't want to go back a slide, I want to go back to the previous page.
ubitakken 29 minutes ago 0 replies      
Add a markdown file to your repo and get a presentation. Very nice!
willtim 2 hours ago 0 replies      
I've been doing this for years using pandoc and beamer.
Romanulus 1 hour ago 0 replies      
Marketing instincts came through; Gitch is waaaay too close to ginch.
jduckles 1 hour ago 0 replies      
Cicero http://cicero.xyz/ does this using reveal.js
orschiro 2 hours ago 1 reply      
It would be cool to also "pitch" Github Gists. :-)
MaxLeiter 2 hours ago 1 reply      
Can the topic be changed to include BitBucket like the README does?
LeicaLatte 1 hour ago 0 replies      
Very nice.
DJI Puts $145K Bounty on the Drone Pilots Who Were Disrupting Flights improdrone.com
85 points by chidog12  3 hours ago   36 comments top 11
abpavel 4 minutes ago 0 replies      
There are legitimate uses of airports by drone enthusiasts who rent out whole airfield for weekend gatherings and events. The clubs in my area fly almost exclusively this way. I wonder how the restrictions will fare in this case.
nullnilvoid 2 hours ago 4 replies      
> Let's not forget that DJI has software which limits the abilities of the drones based on the pilot's location. For example, DJI has established that airports are no-fly zones. However, there are some ways that drone pilots can bypass this measure and fly without restriction from DJI.

DJI is going really far on this. They already have pre-installed software which restricts drones in no-fly zones. On top of that, they are putting out a bounty program. I've heard there are special electrical guns which can shoot down drones; it might be useful to deploy these in no-fly zones.

salimmadjd 2 hours ago 2 replies      
I'm so glad DJI is doing this. I started flying drones in September 2016. I used to be annoyed by them, but now, as a pilot and a photographer, I really love the unique perspectives drone footage gives me.

That said, I hear from many people who dislike drones (I can understand them since I was one, too) and all it takes is one or two people to ruin it for everyone else. So it's really good DJI is taking the lead on this.

zkms 1 hour ago 1 reply      
The solution is simple: allow low-power ADS-B to be radiated from drones and encourage/require its use: see http://www.uavionix.com/blog/the-case-for-low-power-ads-b/

Add a penalty for flying anywhere that could interfere with any sort of crewed aircraft operations (including airports and flight paths, of course) without an operational and registered ADS-B transceiver. You fly next to big iron, you broadcast and listen to ADS-B like big iron, so nobody gets hurt.

glangdale 2 hours ago 8 replies      
Am I the only one getting heebie-jeebies over the prospect of drones being used for purposes of terrorism? It sounds like these guys (in the article) are just jack-asses but it doesn't seem hard to imagine bad actors doing quite a bit worse. I'm also scared of the prospect that autonomous drones might be considerably harder to stop and/or catch the perpetrators before or afterwards.

It also seems likely that this will be something that is relatively cheap and widely available - you might have to be a serious actor to lay hands on a SAM, but a lone nutter can probably afford a drone or ten.

nitin_flanker 2 hours ago 0 replies      
I think the website is down. Here's the cached one: http://webcache.googleusercontent.com/search?q=cache:h9-Uy1z...
astrodust 1 hour ago 0 replies      
Drones are nothing if not noisy, and that noise is unlike almost any other piece of machinery.

It shouldn't be too hard to set up towers around the airport with sensitive microphones that can triangulate the location of a drone flying within a restricted area. From that point it can sound an alarm and/or deploy counter-measures.

bitmapbrother 56 minutes ago 0 replies      
Good on DJI, but China should be instilling fear in anyone who even thinks about doing this in the future. I can only imagine the penalty they would impose for causing a crash that took lives.
refresh99 1 hour ago 0 replies      
When people start getting jail time for flying in restricted airspace, the incidents will drop. Idiots pointing laser pointers at aircraft used to be pretty common until people started landing in jail for it. Saying drones need transponders and such is a non-starter, because the same people ignorant enough to warrant them wouldn't bother to spend the money on one in the first place, or follow those regs.

Once a few people get the book thrown at them word will spread and people will start using their brains a bit more before they take off.

QuercusMax 2 hours ago 1 reply      
Link is broken - anyone have a mirror?
ohashi 2 hours ago 2 replies      
Is this a problem just in China or what about airports around the world?
Googles PatentShield helps startups fight patent lawsuits in return for equity techcrunch.com
107 points by shard  6 hours ago   40 comments top 9
samaparicio 3 hours ago 3 replies      
A lot of startups get sued by Non-Practicing Entities (trolls) that have no operating business to speak of. That provides the slimmest counterattack surface, so I'm not sure how useful it is to have a portfolio to hit back with.

I think a better defense mechanism would be a legal defense fund that would force the trolls to make their case at trial, to go through discovery, to bring expert witnesses.

Because the law firms that represent them work on contingency, this would effectively cut into the potential licensing fees, and make trolling less profitable (and less likely).

Also, a lot of trolls extract patent licensing fees out of startups for patents that should have never been granted, and that deserve to be invalidated (e.g with prior art), but the process of getting a patent thrown out is expensive, so having a fund would greatly help.

The other strategy that could work is to get all the startups that get sued by a troll for a specific infringement and make a sort of "reverse class action" - making it possible for the startups to re-use the same lines of argument, evidence, etc in their cases.

jacquesm 3 hours ago 3 replies      
Google is taking a leaf out of the Mafia playbook here. Classy. Nice start-up you have there. Would be a shame if one of these pesky trolls sued you. But if you join our organization as a partner you will be protected.
CalChris 3 hours ago 0 replies      
So basically you give up an unstated x% ahead of time for access to Google+Intertrust's portfolio. A few thoughts.

First, this only provides access to the patent portfolio. It doesn't pay the (considerable) litigation bills.

Second, this portfolio is already available. If Alice sues Bob, Bob can negotiate just in time with Intertrust or IBM or ... for access to a defensive/offensive portfolio. This acquisition of patents during litigation is common practice.

Third, just as Intertrust is going to do their due diligence on you, you are taking a risk that their portfolio is a good match for your risk. You need to do your due diligence on them and on their portfolio.

I might go for this, but it would be at a pretty low percentage, like less than a percent.

dis-sys 3 hours ago 2 replies      

patent laws are so broken that patent trolls are everywhere suing startups to get $ from them.

google now has a business that can directly benefit from such increasing number of patent trolls.

sounds not very inspiring to me.

who is always on the losing side? average small companies. who is screwing the economy & innovation and shows no sign of change? governments who refuse to actually reform such a 100+ year-old broken system.

joelthelion 49 minutes ago 0 replies      
When you need this I think we can safely say the patent system is broken beyond repair. This is basically extortion (note that I'm not blaming Google!).
anon374939 3 hours ago 3 replies      
It's always nice when the private sector figures out a way to solve a problem that should be solved by government, but government is incapable or unwilling to do so.
partycoder 1 hour ago 1 reply      
Patent law needs reform.

If you analyze the lives of important inventors and innovators of the 20th century, there has always been some patent pain involved that was not in the interest of the "greater good".

The reason America didn't have a significant air force in WW1 compared to other powers was in part the legal battles between Wright and Curtiss. The government intervened in the patent-driven battle so planes could be mass-produced and used in the war effort.

Then, the inventor of the TV (Farnsworth) got sued by radio manufacturers and could never actually profit from his creation.

Then, many patents get extended for excessive periods of time to prevent things from becoming public domain (e.g. Disney).

EGreg 1 hour ago 0 replies      
Why don't we have something like the open source movement in drugs? Using PATENTLEFT. All those possible inventions for the long tail if people were allowed to build on top of existing discoveries.
aanm1988 2 hours ago 0 replies      
Gonna go ahead and (once again) be the naysayer when it comes to google. This just makes google a very effective new form of patent troll.
India Is Winning Its War on Human Waste gatesnotes.com
567 points by gauMah  14 hours ago   220 comments top 29
avar 12 hours ago 19 replies      

> Unfortunately, in many places, it's not feasible to lay down sewer pipes or build treatment facilities. [...] But giving people access to toilets isn't enough. You also have to persuade them to use the toilets.
I can't find it now, but there was a news video making the rounds a year or two ago that showed that this problem is much more fundamental than that.

It showed Indians in a tiny village who were falling ill because they were literally taking a shit in the same river that they were getting their drinking water from, just a few meters away.

These people all knew each other, and even if they didn't have any toilets or basic infrastructure I would have thought that something as basic as "if you shit where you drink, you get cholera" would be common knowledge anywhere in the world by now.

Of course it would have been nice for those people to have sewer pipes, toilets, etc. But in that case the problem could have been solved with a few shovels and a marked area indicating where you should go to do your business, preferably in some open field far from the drinking water.

For those people toilets would be nice, but unnecessary. They clearly all have a shared interest in not drinking each other's shit. If by some magic they aren't aware that mixing shit with water leads to bad consequences, that seems solvable by some one-time government presentation on the consequences of continuing to do what they were doing.

But somehow the problem persists, it's unbelievable.

nojvek 13 hours ago 6 replies      
Two of the most forward thinking folks I believe are Bill Gates and Elon Musk. While Elon wants to carve way into science fiction, Bill wants to ensure no one gets left behind. Exciting time to be alive.

In this case big kudos goes to Narendra Modi and the Indian govt for ensuring this happens. I believe such fundamental things are most effective when the govt pushes for them rather than individuals.

theprop 19 minutes ago 0 replies      
This is probably the single best way to reduce rape in India as well. I remember in one Indian tv serial episode, the plot was a woman who refused to marry a man (in a village) until he got indoor plumbing in the house so she didn't have to go to a field at night to use the bathroom.

India was one of the first countries in the world to emphasize regular bathing -- this was thousands of years ago, even the kings among the Europeans started bathing daily fewer than 300 years ago. We need to get hygiene country-wide to the world standards it created.

blhack 12 hours ago 2 replies      
It's outlined in the article, but I think it's worth saying again: the most difficult part of what they're doing seems to be getting people to change their habits.

It's the same psychology involved in pollution and climate change. Because people don't see an immediate reaction to, for instance, plastic pollution, it's harder to get them to understand the serious effects it is having.

sytelus 10 hours ago 1 reply      
The reason toilets don't get used even when available in India is that the Indian version of toilets is not very maintainable. First, everyone needs to get water from somewhere and carry it all the way to the toilet. If the water turns out not to be enough, the toilet retains the waste and starts becoming smelly and unhygienic. Also, because of the extensive water use, it's an unpleasant place to walk around. Toilets with flushing systems use 1950s mechanical parts and often break down or get plugged easily, with no way to unplug them unless a maintainer comes around (who often doesn't exist).

So the basic problem is that Indian toilet tech and the whole process have not evolved. In the Western world, and especially in places like Japan, there are lots of people working on innovations in this area and things keep improving. In India, this area is considered the domain of the lowest members of the social caste system, and for intellectuals to think about it or work on it is considered taboo.

linux_devil 13 hours ago 3 replies      
This village in India is the cleanest village in Asia; sharing one article: http://www.bbc.com/travel/story/20160606-the-cleanest-villag...
vthallam 13 hours ago 1 reply      
Glad this initiative is working. The Indian govt emphasizes this very frequently and I am surprised to see this dashboard: http://sbm.gov.in/sbmdashboard/Default.aspx
arcticbull 13 hours ago 1 reply      
Based on my walk to work along Mission street, I'd say we've got something to learn X_x
walrus01 7 hours ago 0 replies      
I recall seeing a video of a property owner who has a piece of land with a long, approx 2.5 meter height wall facing a street in a suburb of Delhi. He had a significant problem with the wall becoming an unofficial "designated pissing wall" and the whole place reeked of urine. Problem was solved by first power washing the wall and then hiring a few local mural painters to paint images of various Hindu gods (Hanuman, Ganesh, etc) on the wall. No more pissing problem.
agustamir 12 hours ago 1 reply      
I hope that all these toilets being built are actually being used, serviced and maintained by the people of the community. I have seen TV reports where these toilets were being built to achieve targets and end up being either filthy because no one maintained them, or used as storage(?). This, along with a massive awareness drive(the govt has some ads running on tv) to push people towards using these toilets. Many challenges ahead, and I can only hope for the best for the motherland.
thowbit9 12 hours ago 1 reply      
This has nothing to do with building toilets. It's about education. The northern parts of India have a very low literacy rate. High-literacy + high-HDI states like Kerala and Meghalaya were already clean. The dashboard's 2014 data proves that.
blobbers 11 hours ago 1 reply      
India is Winning Its War on Human Waste!!!

... unfortunately San Francisco is losing that same war.

criddell 12 hours ago 0 replies      
danellis 3 hours ago 1 reply      
I guess I'm confused as to why this is only a problem for women and girls. Are there toilets that the men use that they don't let the women use? Or is it that Indian men don't have a problem with defecating anywhere?
Taylor_OD 10 hours ago 2 replies      
Only 1.7 million people die from unsafe water? That's a lot fewer than I thought. It's still an awful and fixable issue, but not as bad as I expected.
thowbit9 12 hours ago 0 replies      
Wow, very proud of my state Kerala, which looks the cleanest. Kerala always stands out from the rest of India in all positive aspects.
pm90 13 hours ago 0 replies      
Heh, I think Gates might have inadvertently DDoS'd the dashboard detailing toilet coverage in Rural India. I won't post the link here but its at the end of the article.
jwilk 11 hours ago 0 replies      
Archived copy, which can be read without JS enabled:


Abishek_Muthian 4 hours ago 0 replies      
It's worth noting that, to fund the Swachh Bharat mission, businesses in India are contributing 0.5% towards it via a service tax on every bill. It's a collective effort of public-govt-private partnership.
baron816 12 hours ago 0 replies      
Wait, did Bill Gates really go out to the train tracks in India and film people pooping?
throwaway12837 2 hours ago 0 replies      
HN is one step away from becoming 4chan.
dafrankenstein2 12 hours ago 0 replies      
So it's finally happening? We once heard that India has more phone users than toilet users.
jimmykennedy 13 hours ago 1 reply      
kahrkunne 7 hours ago 2 replies      
soperj 13 hours ago 2 replies      
thowbit9 12 hours ago 1 reply      
What I see and hear from people on the ground is that Narendra Modi is a showman with no substance. The dashboard looks cool, but those toilets that were built are not maintained and are on the verge of ruin.
zerop 13 hours ago 1 reply      
Narendra Modi, a person who can change things in India.. Because he knows Sankhya.
jarmitage 6 hours ago 0 replies      
Read the comments of this article.

I think the 'poor' of India should be telling Bill Gates where to shit and not the other way around.

noiv 12 hours ago 4 replies      
Request to de-weaponize the title.

War should denote humans fighting humans only - not humans fighting physics.

Swift architecture at Uber skilled.io
183 points by tsycho  8 hours ago   106 comments top 18
discreteevent 9 minutes ago 0 replies      
"First, you have to be aware that structs can increase your binary size. If you have structs into lists they are created on the stack and they can increase your binary size."

I don't get this. Is it saying that structs can increase your binary size and as a separate issue they are created on the stack. Or is it saying that because structs are created on the stack they can increase your binary size? How would that work if stack allocation is something that happens at runtime and affects your memory footprint rather than binary size? (I don't use swift so I might be missing something here)

MooMooMooney 7 hours ago 2 replies      
"Lastly, we started combining files, and we found out that combining all of our 200 models into one file decreased the compilation time from 1min35sec, to just 17sec. So we are like, "Hold on, this is interesting, combining everything into one makes it much faster." The reason for this is that, as much as I know, that a compiler does type checking for every single file. So if you spawn 200 processes of Swift compilers, it needs to 200x check all the other files and make sure that you're using the correct types. So combining everything into one makes it much faster. "

Good to know

hota_mazi 3 hours ago 2 replies      
> The reason for this is that, as much as I know, that a compiler does type checking for every single file. So if you spawn 200 processes of Swift compilers, it needs to 200x check all the other files and make sure that you're using the correct types.

I'm a bit baffled by that. Is the Swift compiler that naive?

Surely you know how to assess the number of processors/cores on your system and you spawn threads in a way that doesn't lead to diminishing returns. You use a bounded thread pool and you stay within these bounds.
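The bounded-pool scheme described above can be sketched in a few lines (illustrative only; `type_check` is a made-up stand-in for the per-file compiler work, not anything from the Swift toolchain):

```python
# Cap concurrency at the number of cores with a bounded pool rather than
# spawning one worker per input file, so adding files doesn't multiply
# scheduling overhead past the point of diminishing returns.
import os
from concurrent.futures import ThreadPoolExecutor

def type_check(filename):
    # Placeholder for real per-file work (parsing, type checking, ...)
    return f"checked {filename}"

files = [f"Model{i}.swift" for i in range(200)]

# max_workers is bounded by the machine, not by the number of inputs.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(type_check, files))
```

The same shape works with `ProcessPoolExecutor` when the per-file work is CPU-bound.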

Seeing such a speed up from simply merging text files is really puzzling to me. You have to type check the code anyway, surely the overhead of opening a new file is completely negligible compared to running this type checker on the same file? Especially since all these instances of the type checkers have to share a lot of data anyway.

Swift is very cool and I'm excited to see it and Kotlin become our next generation languages for mobile, front end and back end alike, but it seems to me the Swift compiler is still very immature.

andymatuschak 1 hour ago 1 reply      
One other big contributor to app size we've noticed at Khan Academy: extensive use of value types, particularly ones whose fields require more than a couple words of storage.

The big-picture observation is that for value types with storage larger than a few words, several instructions must be emitted per call and per storage word, because they cannot be passed by value in registers. And Swift often emits more calls than you can see (e.g. thunks, protocol witnesses, weak sentinels, etc). This is not new to Swift (the same requirement exists when passing large C values around), but we use value types a lot more in Swift for various reasons, so the issue becomes more salient.

In terms of remediation, I audited our app for all structs larger than a few words. I just did this manually; there were something like 120 structs to look at. For each, I converted it to a class then evaluated the impact on generated code size. Only four structs had meaningful impact on generated code size (to the tune of ~13MB), and happily, they were fully immutable, so they retained their value semantics even when converted to classes. If they had not already been fully immutable, I would have had to spend some time either adapting the classes to achieve value semantics, or adapting their clients to tolerate reference semantics.

Then I audited our app for all enums larger than a few words. These can be made pass-by-reference by using Swift's `indirect` feature, which implicitly boxes associated storage. We had one enum for which this made a substantial difference, to the tune of several MB.

Then I had to make sure runtime performance hadn't been too badly damaged by all the new dynamic allocations and indirections. In the end, I observed nothing noticeable. We don't have formal repeatable performance tests, though; it would have been interesting to see the impact on those.

C++ has this issue, too; it largely handles it by using reference arguments when consuming large stack values. In the future, it's possible for Swift to optimize many cases (especially intra-module) where this occurs by allowing deeper stack frames to reference value types stored in parent stack frames when it can prove it's safe. Rust has a lot of fanciness here you might find interesting!

bsaul 35 minutes ago 0 replies      
Funny he didn't mention the command line tool that gives compile time per function (explained here: http://irace.me/swift-profiling). That proved to be the greatest help in my case in reducing compile times drastically.

He did mention running a tool to add explicit types everywhere, but it's very often a matter of just writing the most generic ones. Maybe not in Uber's case, but everybody else should try it.

jordansmithnz 6 hours ago 0 replies      
In regards to the tool mentioned that provides information about binary size contribution... ("If you want to see this open-source. Just scream out loud")

I am screaming out loud. Please open source this!

Entangled 6 hours ago 2 replies      
> Android engineers are more welcome now. Especially if they write Kotlin.

I love Swift, and I love Kotlin.

To any young programmer out there (I'm in my fifties), learn these two languages and you will be highly employable for the next decade, on top of the wave.

dankohn1 5 hours ago 0 replies      
I feel very appreciative to have Uber working through all these bugs in the tooling so that the rest of us can take advantage once things are more reliable.
mgoblu3 4 hours ago 1 reply      
Does anyone have any other good articles/examples of maintaining large iOS applications like this? I found this super helpful and interesting to my current project so was also interested in other cases like this
perfmode 7 hours ago 0 replies      
I would love to see an implementation of the router component.
nnain 2 hours ago 0 replies      
It's so great that they shared all these findings. I hope the Xcode team takes notice and fixes some of the nagging compile-time and indexing issues!
santaclaus 6 hours ago 3 replies      
Why didn't they use react native?
xyzzy4 7 hours ago 8 replies      
It seems the Uber app was extremely over-engineered. Call me crazy but I don't think you need 100 engineers to recreate the front-end of Uber.
JustSomeNobody 6 hours ago 2 replies      
asimpletune 7 hours ago 0 replies      
Awesome post!
beaconstudios 6 hours ago 2 replies      
hm, so they rewrote the whole platform from scratch in <totally hip language of the month> and it didn't all crash and burn? That's kind of surprising - this is usually a really stupid idea because you often end up mostly solving the problems of the v1 architecture but introducing a whole bunch of different, equally painful problems - but with the added headache of the whole codebase being newish.
unpopular11333 6 hours ago 0 replies      
I was hoping to find something about how the Swift architecture affected their ability to cleanly implement a tipping feature, since they've been a tad behind the curve on this very highly demanded functionality (just pinging my general group).
fred_is_fred 5 hours ago 0 replies      
It's really irritating that Apple picked the name of an existing software product for their language. When I saw "<large Company> swift architecture" I was pretty excited to see how they were using object storage.
Should I buy ECC memory? (2015) danluu.com
222 points by colinprince  10 hours ago   163 comments top 24
nostrademons 10 hours ago 3 replies      
While I was at Google, someone asked one of the very early Googlers (I think it was Craig Silverstein, but it may've been Jeff Dean) what was the biggest mistake in their Google career, and they said "Not using ECC memory on early servers." If you look through the source code & postmortems from that era of Google, there are all sorts of nasty hacks and system design constraints that arose from the fact that you couldn't trust the bits that your RAM gave back to you.

It saved a few bucks in a time period when Google's hardware costs were rising rapidly, but the knock-on effects on system design cost much more than that in lost engineer time. Data integrity is one engineering constraint that should be pushed as low down in the stack as is reasonably possible, because as you go higher up the stack, the potential causes of corrupted data multiply exponentially.
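The kind of defensive hack the comment alludes to is application-level checksumming: when you can't trust the bits underneath you, checksum data at write time and verify at read time, so corruption is at least detected rather than silently propagated. A minimal sketch (illustrative only, not Google's actual code; GFS-style systems do this per fixed-size chunk):

```python
# Prefix every stored record with its CRC32 and verify it on read, so a
# flipped bit surfaces as an error instead of bad data.
import zlib

def pack(payload: bytes) -> bytes:
    """Prefix payload with its CRC32 so later corruption can be detected."""
    return zlib.crc32(payload).to_bytes(4, "big") + payload

def unpack(blob: bytes) -> bytes:
    """Verify and strip the CRC32 prefix, raising on any mismatch."""
    stored = int.from_bytes(blob[:4], "big")
    payload = blob[4:]
    if zlib.crc32(payload) != stored:
        raise ValueError("checksum mismatch: data corrupted")
    return payload

blob = pack(b"some precious record")
assert unpack(blob) == b"some precious record"
```

This detects corruption but can't correct it; that's what ECC (or replication) is for.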

olavgg 9 hours ago 4 replies      
Can people here please stop posting that ZFS needs ECC memory. Every filesystem, be it FAT, NTFS, or EXT4, runs more safely with ECC memory. ZFS is actually one of the few that is still safer if you don't run with ECC memory. Source: Matthew Ahrens himself: https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=...
blackflame7000 10 hours ago 3 replies      
Altitude also plays a role in random memory corruption.

From the Wikipedia article on ECC RAM: "Hence, the error rates increase rapidly with rising altitude; for example, compared to the sea level, the rate of neutron flux is 3.5 times higher at 1.5 km and 300 times higher at 10-12 km (the cruising altitude of commercial airplanes).[3] As a result, systems operating at high altitudes require special provision for reliability."
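To put the quoted multipliers in concrete terms, here's a back-of-envelope sketch; the sea-level baseline is a made-up illustrative figure, not a measured rate (real per-DIMM rates vary by orders of magnitude):

```python
# Scale a hypothetical sea-level soft-error rate by the neutron-flux
# multipliers quoted above.
BASELINE_ERRORS_PER_GB_YEAR = 1.0   # hypothetical, for illustration only
RAM_GB = 16
YEARS = 3

flux_multiplier = {
    "sea level": 1.0,
    "1.5 km altitude": 3.5,
    "10-12 km (airliner cruise)": 300.0,
}

expected = {
    place: BASELINE_ERRORS_PER_GB_YEAR * m * RAM_GB * YEARS
    for place, m in flux_multiplier.items()
}
for place, n in expected.items():
    print(f"{place}: ~{n:g} expected bit errors over {YEARS} years of {RAM_GB} GB")
```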

spullara 8 hours ago 4 replies      
I reproduced this by bit-squatting cloudfront.net after reading about it. So many memory errors!


Loved the variety as well. Sometimes, even though the requests came to me, the Host header was correct!
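Bitsquatting works because a single flipped bit in non-ECC RAM can silently turn one domain name into another before it is resolved. As a rough sketch (the function name and the choice to skip the dot are mine, not taken from any particular tool), here is how you might enumerate the registerable single-bit-flip neighbors of a domain:

```python
import string

# Characters that may legally appear in a hostname label
VALID = set(string.ascii_lowercase + string.digits + "-")

def bitsquats(domain):
    """Enumerate valid hostnames that differ from `domain` by a single bit flip."""
    variants = set()
    for i, ch in enumerate(domain):
        if ch == ".":
            continue  # leave the label separator alone
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped != ch and flipped in VALID:
                variants.add(domain[:i] + flipped + domain[i + 1:])
    return variants

# 'c' with bit 1 flipped becomes 'a', so "aloudfront.net" is one such neighbor
print("aloudfront.net" in bitsquats("cloudfront.net"))  # prints True
```

Register a handful of these and listen, and any machine whose RAM flips the right bit will send its traffic to you instead of the real domain.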

ReligiousFlames 7 hours ago 1 reply      
An old article from DJB worth perusal: http://cr.yp.to/hardware/ecc.html

It's also worth noting that not all ECC (SECDED) is created equal: ChipKill and similar schemes might not survive physical damage, since shorts on the data bus are likely, but recovering from a single malfunctioning chip that produces or experiences a higher hard error rate is possible.

Also, it'd be really cool if some shop a la Backblaze blogged about large-scale monitoring for soft and hard RAM errors across chips and modules (plus motherboards and CPUs). Without collecting and revealing years of data from real use, the conversation devolves into opinion and conjecture.

Finally, not all use-cases can benefit from ECC (e.g. Angry Birds), but there are some obvious/nonobvious ones that can (e.g. DNS bitsquatting against non-ECC routers, or processing bank transactions).
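The single-error-correcting idea behind ECC is small enough to sketch. Below is a toy Hamming(7,4) encoder/corrector in Python; real SECDED DIMMs add an extra overall-parity bit so double-bit errors are detected rather than miscorrected, and they operate on 64-bit words, not 4-bit nibbles, so treat this strictly as an illustration of the syndrome trick:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword; parity bits sit at positions 1, 2, 4."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):
    """Return (corrected codeword, 1-indexed error position; 0 means no error)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3  # the syndrome spells out the flipped position
    fixed = c[:]
    if pos:
        fixed[pos - 1] ^= 1
    return fixed, pos

word = hamming74_encode([1, 0, 1, 1])
damaged = word[:]
damaged[4] ^= 1                      # simulate a cosmic-ray bit flip
fixed, pos = hamming74_correct(damaged)
print(fixed == word, pos)            # prints: True 5
```

The anvil here is the parity-check structure: any single flipped bit produces a unique nonzero syndrome, so the hardware can fix it on the fly and bump a "corrected error" counter.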

veidr 7 hours ago 1 reply      
Yes. Everybody reading this should use ECC RAM, and non-ECC RAM should be called "error-propagating RAM".

Random bit flips aren't cool, and they happen regularly. Most computers that have ECC RAM can report whether errors happen. I see them at least once a year or so. For instance, here are 2 ECC-correctable memory errors that occurred just last month.

Cosmic rays? Fukushima phantom? Who knows. You'll never know why they happen (unless it's like a bad RAM module and they happen a lot), but if you don't rock ECC you will never know they happened at all. You'll be left guessing when, years later, some encrypted file can no longer decrypt, and all the backups show the same corruption...

[1]: https://www.dropbox.com/s/zndvy3nkv1jipri/2017-03-20%20FUCK%...

[2]: https://www.dropbox.com/s/6yeoedc7ajzq4u9/2017-03-20%20FUCK%...

mjevans 9 hours ago 2 replies      
A better question is why /shouldn't/ you use ECC memory?

Generally the answer is any context where you legitimately do NOT care about your data at all, but you still care about costs. This predominantly devolves into consumption-only gaming systems.

In all other cases everyone would be better served (in the long run) by buying ECC RAM.

VA3FXP 10 hours ago 3 replies      
Depends on what you are doing.

ZFS storage servers: Hell yes
High-value data in my DB? Hell yes
Email server: Nope
Super cool gaming rig: Nope
Cluster: Hell yes

General office workstation: maybe.

I don't have the budget for 20 redundant copies. I do have the budget for slightly more expensive RAM. Especially on my ZFS storage arrays.

ECC memory is like insurance. You hope you never need it. One real downside I have found is finding out _when_ that memory correction has saved your ass. RAID arrays can alert you when a disk is dead. SMART mostly tells you when disks are failing. I haven't found a reliable tool to notify me when I am getting ECC errors/corrections.
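On Linux there actually is a place to look: the kernel's EDAC subsystem exposes per-memory-controller corrected and uncorrected error counters under sysfs, and tools like edac-util and rasdaemon read the same data. A minimal sketch of a check you could run from cron, assuming the EDAC driver for your chipset is loaded (the function name and the alerting hook are mine):

```python
import glob
import os

def edac_counts(base="/sys/devices/system/edac/mc"):
    """Read corrected (ce) and uncorrected (ue) error counts per memory controller."""
    counts = {}
    for mc in sorted(glob.glob(os.path.join(base, "mc[0-9]*"))):
        entry = {}
        for name in ("ce_count", "ue_count"):
            path = os.path.join(mc, name)
            if os.path.isfile(path):
                with open(path) as f:
                    entry[name] = int(f.read().strip())
        if entry:
            counts[os.path.basename(mc)] = entry
    return counts

if __name__ == "__main__":
    for mc, entry in edac_counts().items():
        print(mc, entry)  # hook your alerting (mail, Nagios, etc.) on nonzero counts
```

An empty result usually means no EDAC driver is loaded for your platform, not that the hardware is error-free.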

intrasight 1 hour ago 0 replies      
I am typing this (finally!) on my new desktop build. I did mull over the decision for a while but finally went with Xeon and ECC. So the memory cost more - perhaps even twice as much - so what? I use my computer pretty heavily for my work - with several VMs running at a time. If ECC saves me a headache once a year, it will have paid for itself. If it never provides ANY benefit I will still not regret the peace of mind.
lucb1e 8 hours ago 0 replies      
This article is gold in so many ways. It contains interesting bits of information on ECC, company history that I didn't know (namely Sun's and Google's), filesystem reliability (I never knew!), the physics of RAM (50 electrons per capacitor)...

It's a must read, even if only to get you thinking about some of these things.

anonymous_iam 10 hours ago 1 reply      
The article makes no mention of single event upsets (SEUs). These occur randomly when cosmic rays cause a bit flip anywhere in the chip. ECC is a good way to mitigate SEU effects.
notacoward 7 hours ago 0 replies      
Same topic, same conclusion, even more hard facts.


epx 5 hours ago 0 replies      
I think that, given the personal importance of computing devices and storage, no filesystem should exist w/o checksum of metadata+data, and no RAM should be without ECC. The slight increase in cost does not justify the risk.
danielfaust 7 hours ago 0 replies      
I've spent the last two weeks staring at Memtest86+, trying to figure out whether one of my memory modules is damaged, or whether it is the motherboard. These tests take a long time, and yield different results from day to day.

I've decided never ever again to buy non-ECC memory, at least not for 24/7 servers or for workstations.

In a gaming machine / visual typewriter? Sure, non-ECC memory is ok.

mixmastamyk 3 hours ago 0 replies      
I searched recently for a good Linux laptop with ECC but didn't find much, so I settled on a Kaby Lake i5. Does anyone make them?
myrandomcomment 1 hour ago 0 replies      
Yes. Are we done :)
ori_b 10 hours ago 0 replies      
If you can afford it, sure. That's one reason why I'm so happy Ryzen supports it on consumer processors: It makes ECC cheap.
ddingus 10 hours ago 3 replies      

Bit errors are uncommon and range from benign to crash.

Your storage has them, memory has them, network has them.

Non error correcting memory very significantly increases risk.

And this is the kind of risk you don't notice until you do; and when you do, it's often subtle, insidious, and impossible to track down.

Servers: absolutely. It's debatable on the desktop, but we have huge RAM now; might as well error-correct. The per-bit error risk is small, but bigger RAM only increases the overall odds.

omash 7 hours ago 0 replies      
What are the odds of memory errors causing hard disk corruption / boot failure?
sitkack 6 hours ago 0 replies      
I want to thank Jeff for assisting Dan in writing this article.
Splendor 9 hours ago 1 reply      
exabrial 9 hours ago 0 replies      
Yes. Everyone does.
saganus 10 hours ago 1 reply      
"<pubDate>Fri, 27 Nov 2015 00:00:00 +0000</pubDate>"

Needs (2015) added to the Title I think.

godzillabrennus 10 hours ago 1 reply      
Do you use ZFS? If yes then you should use ECC memory.

Do you have a use case where you would want your computer to alert you when the ram is failing? If yes then you should use ECC memory.

Otherwise it's a nicety and probably not worth the money.

Hackers exploited Word flaw for months while Microsoft investigated reuters.com
71 points by T-A  6 hours ago   16 comments top 4
scarybeast 1 hour ago 0 replies      
And _this_, ladies and gentlemen, is why we have disclosure deadlines for security vulnerabilities. For example, Project Zero expects vendors to fix security vulnerabilities within 90 days of notification.

Looking at this story, it's possible that 90 days is almost too long and should be shortened. As time goes on, it's becoming more and more common for multiple parties to become aware of the same vulnerabilities. Not all of those parties have good intentions, as we see here. Shortening the window of exposure is key.

stesch 2 minutes ago 0 replies      
Microsoft should hire this programmer: https://i.redd.it/sd72mfmj7ety.jpg
doggydogs94 23 minutes ago 1 reply      
Vendors, even ones as large as Microsoft, do not have infinite resources available to evaluate vulnerabilities. There are only so many of the issues you can work on at once. They have to evaluate each issue and prioritize the fix. In this case, they merely did not recognize the potential scope of the problem at hand.
gwu78 1 hour ago 4 replies      

The strange "counterargument" I commonly see on HN to any suggestion that Microsoft's closed-source software could potentially be unsafe for use on an internet-connected computer is that the company has "improved" since some earlier 1990's/2000's time period.

Are these commenters suggesting that other, open source operating system choices have not also improved since that time period? Should one consider how much each respective system needed to improve?

(By "other, open source operating system choices", I mean the ones that were able to connect to the internet for years before Gates decided the www was something his company should be interested in and to copy the TCP/IP stack from an open source kernel into the Windows kernel).

Are there convincing arguments why Microsoft deserves special treatment compared to the open source alternatives, i.e., why their users should not be permitted to freely evaluate the Windows kernel or Office source code via the public web? Are there compelling reasons why MS users should not be allowed i.e. given the option to edit/remove source code they are uncomfortable with and recompile? Consider the effects of limiting the number of people who can find and fix defects in a product.

Does closed source status of Windows make Microsoft's software superior to the longstanding open source operating system alternatives?

Linux Programming: Signals the easy way stev.org
99 points by dkarapetyan  7 hours ago   18 comments top 3
RcouF1uZ4gsC 3 hours ago 0 replies      
Boost ASIO makes signal handling pretty painless in C++. It handles the signal and calls a callback in which you can use any function, not just signal-safe functions.


bigger_cheese 5 hours ago 2 replies      
Is signalfd viable at all (http://man7.org/linux/man-pages/man2/signalfd.2.html)? Supposedly it was created as a better alternative, i.e. you can poll for signals from it. Has anyone tried using it?
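signalfd is Linux-specific, but the same pollable-fd trick exists elsewhere; for instance, Python's stdlib offers signal.set_wakeup_fd, which writes each incoming signal number as a single byte to a pipe you can then hand to select/poll/epoll. A minimal Unix-only sketch (the pipe setup is the standard self-pipe pattern, not anything specific to signalfd):

```python
import os
import signal

rfd, wfd = os.pipe()
os.set_blocking(wfd, False)                        # the wakeup fd must be non-blocking
signal.signal(signal.SIGUSR1, lambda s, f: None)   # a Python handler must be installed
signal.set_wakeup_fd(wfd)

os.kill(os.getpid(), signal.SIGUSR1)   # deliver a signal to ourselves...
data = os.read(rfd, 1)                 # ...and receive it as a readable byte
print(data[0] == signal.SIGUSR1)       # prints: True
```

In a real program you would add rfd to the same select/poll loop as your sockets, which is exactly the workflow signalfd enables natively in C.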
shmerl 6 hours ago 3 replies      
That's needed for C/C++. I wonder how something like Rust, for example, deals with it. Should one also worry about it by default?
New Evidence Suggests Humans Arrived in the Americas Earlier Than Thought npr.org
195 points by el_duderino  13 hours ago   110 comments top 13
Retric 13 hours ago 6 replies      
It's only 50 miles by sea from Eurasia to the Americas. People got to Australia around 40,000-60,000 years ago by boat, so it seems unlikely that people needed a land bridge to get to the Americas.

Remember, the globe also looks like this: https://en.wikipedia.org/wiki/Eskimo#/media/File:Inuit_conf_...

PS: I suspect the Americas were well known in the 1700's, but nobody was talking to the Russian Far East when making maps.

tzs 12 hours ago 9 replies      

1. a site that appears to have been used by humans to do some kind of processing of bones, and

2. very old (130k) bones at that site that have been processed,

how would you rule out the possibility that the site was made and the bones processed by people who came much later (say, 15k years ago), who found a bunch of 115k-year-old bones from animals that had died natural deaths and used them?

meri_dian 12 hours ago 1 reply      
Notice that the article states the bones themselves are from 130kya, not that they were discovered in a layer of sediment deposited 130kya. This means that the age of the bones says nothing about the tools scattered around them at the site, or the ancient users of those tools.

Taking this into account, I think a more plausible explanation is that ancient humans - Clovis ancient (13kya) or perhaps older (20kya) - discovered these bones and kept them, used them, then discarded them or lost them with the other stone tools from the site.

Will_Parker 13 hours ago 3 replies      
For a skeptical point of view, see the second half of Jared Diamond's article: https://www.edge.org/response-detail/27111
takk309 11 hours ago 0 replies      
More coverage of this can be found on ArsTechnica. A few more details are given that the NPR report omits.


goodcanadian 13 hours ago 1 reply      
You can count me firmly in the skeptics category. However, it is interesting, and human beings have been around a long time. It would be a bit surprising if no one ever made it to the Americas until "recent" times.
vivekd 4 hours ago 2 replies      
So let's look at the evidence:


This site is an outlier among archaeological sites in terms of its date. The date this research gives for humans in California is 130,000 years ago.

As far as I am aware, the earliest recorded human site outside of Africa is one in China from 120,000 years ago.

And even that site is an outlier, because human migration into the Middle East began happening 120,000-100,000 years ago, into Asia 60,000 years ago, and into Europe 45,000 years ago.

Now that in itself is not conclusive; maybe there are older sites that we haven't found, and this discovery pushes back human migration. Except there are other problems here.


So the scientists say the rocks have markings that look like they had been used as a hammer and anvil, and there are some mastodon bones nearby.

Now no actual human bones were found near this site. Which is certainly not impossible for these kinds of sites. More problematic is that there were no obvious stone tools other than rocks with markings that look like they'd been hit.

With ancient humans, especially ones capable of hunting mastodons, you would expect them to make sharp stone points by flaking (hitting one stone against another). This should leave flakes and tools on the ground. Yet there were no stone tools of this sort found anywhere at this site.


Further, once ancient humans killed the mastodon, you would expect them to cut pieces of flesh off with a stone knife so that it could be eaten. This process would leave marks on the bones where the flesh had been cut. The article tells us that no such markings were found on these bones.

> Yet there were no cut marks on the bones showing that the animal was butchered for meat.


None of these factors is fatal by itself, but consider all of them combined: the outlier date (way too early, considering the mainstream evidence shows our species was just getting set up in the Middle East at this time), the absence of human tools nearby, the absence of cut marks on the bones, the absence of any human skeletons or remains in the area. All of these lead me to think there must be another explanation.

It's not impossible that the bones of dead mastodons can get trampled or broken. Under the right circumstances, it's not impossible that trampling can cause rocks to smash against one another or leave marks on the rock. Maybe there was a cliff in the area many years ago and the bones and the rocks took a deep fall. In any event, there are alternative explanations and it is not inconceivable that this happened through natural means. This is hardly smoking gun evidence of human in California 130 000 years ago.


And I think the weakness of this evidence is betrayed by the unusual explanatory leaps made by the discoverers. When it is pointed out that there is no evidence the remains had been butchered for meat, one responds:

"The suggestion is that this site is strictly for breaking bone, to produce blank material, raw material to make bone tools or to extract marrow." Seriously? Why would they break bones without butchering the meat and eating it? Why would you have a separate site just for bone marrow? That makes no sense.


>He says the rocks showed clear marks of having been used as hammers and an anvil.

Really? I mean, they have pictures of the rocks on the site. It doesn't seem that clear to me that they were hammer and anvil stones. Is it clear to anyone else? They could just as easily be old worn rocks. I would expect humans who could make it from Africa, or wherever, all the way to California 130,000 years ago to be advanced enough to have tools more sophisticated than just two random pieces of rock to bang together. This was the age when humans had started shaping rocks for particular purposes by flaking them. Why they would just use two natural rocks instead of shaped ones is questionable. If you type "Acheulean tools" into Google, you can see the sophisticated flaked tools (sharp, leaf-shaped blades) our ancestors were making 1.7 million years ago... yet the ones that traveled all the way to California and killed mastodons are still stuck banging random unshaped rocks against each other for marrow. Why?

I have nothing against pushing back migration to the Americas, but this site is probably more wishful thinking on the part of its discoverers than a legitimate discovery.

WalterBright 12 hours ago 2 replies      
I think it was Nova a while back that suggested that Clovis technology came from a stone-age French tribe, and that there were traces of them in the genes of eastern Native Americans.
cucucfudud 12 hours ago 4 replies      
That can't be right. Humans didn't leave Africa until 100 thousand years ago.

If this is real there should also be more evidence of them. Not just this one find.

nickjarboe 10 hours ago 0 replies      
The journal article in Nature can be found here[1]. The abstract is publicly accessible, with the article behind a paywall. I'm not familiar enough with the dating method to comment on possible problems with the age determination, but it was calculated by the complicated method of "230Th/U radiometric analysis of multiple bone specimens using diffusion-adsorption-decay dating models indicating a burial date of 130.7 ± 9.4 thousand years ago."


redsummer 13 hours ago 1 reply      
There are a lot of Sasquatch stories among Native Americans.

Homo floresiensis overlaps with the Ebu Gogo myth: https://en.wikipedia.org/wiki/Ebu_gogo

aaron695 7 hours ago 0 replies      
I really wish I could bet against this, XKCD-style; it's quite a sure thing.

It doesn't make sense, plus we know there are competing incentives at play as people try to get known in their fields by finding 'the oldest'.

Extraordinary claims need extraordinary evidence; here they seem to have fit a theory that lacks provable evidence as an explanation.

julienchastang 12 hours ago 0 replies      
ECREE. "Extraordinary claims require extraordinary evidence." -Carl Sagan
The Myth of a Superhuman AI backchannel.com
160 points by mortenjorck  13 hours ago   206 comments top 46
khiner 2 minutes ago 0 replies      
I didn't read this at first because I thought it just sounded like it would be some opinionated clickbait full of strawmen and superlatives. But then I caved in and read it and it turns out that my instincts were right. Then I saw the author was Kevin Kelly and I felt sad. Almost as sad as when Stephen Hawking said we should discontinue the SETI program because it is most likely that aliens will want to harvest our precious carbon and organs and enslave us if they found out we were here. UNSUBSCRIBE!
_greim_ 7 hours ago 4 replies      
> Temperature is not infinite [...] There is finite space and time. Finite speed.

Just want to point out that while this is true, these things go astronomically high.

> what evidence do we have that the limit is not us?

We can measure the speed impulses travel through neurons, and compare that to, say, the speed of electrical impulses through silicon or light through fiber.

We can find the maximum head-size that fits through a vaginal canal, or the maximum metabolic rate a human body could support, and try to determine if these factors imposed any limitations on intelligence during human evolution.

We can look at other evolved/biological capabilities, like swimming or flying, and compare them to state-of-the-art artificial analogs, and see if a pattern emerges where the artificial analogs tend to have similar limitations as their biological counterparts.

MR4D 3 hours ago 2 replies      
First, let me say that I'm generally a Kevin Kelly fan.

That being said, I think his article shows extreme arrogance for one simple reason: To suppose that superhuman AI (AI smarter than us) won't exist is roughly the equivalent of saying that humans are at the limit on the spectrum of intelligence. Really? Nothing will ever be smarter than us?? Highly doubtful.

That should stand on its own, but I have other critiques. For instance, why does silicon have to be assumed? Why not germanium or graphite, or something else? I have little faith that a CPU circa 2050 will be built exclusively on silicon. By 2100, no way.

Second, there is a simple definition of intelligence that is applicable to many forms: intelligence is the ability to recognize patterns and make accurate judgements / predictions based on previously seen patterns. The higher the accuracy or the more complicated the pattern, the higher the intelligence.

My final point of contention is the idea that AI must emulate human thinking. Why? Maybe human thinking sucks. Maybe Dolphins have much better intelligence, but due to a lack of opposable thumbs, they don't rule the world like we do. And lest you think that less intelligent species can destroy others, could you really doubt that roaches and ants will be extinct before us?

Houshalter 7 hours ago 9 replies      
This is completely silly. Superhuman AI is inevitable because there is nothing magical about human brains. The human brain is only the very first intelligence to evolve. We are probably very far away from the peak of what is possible.

Human brains are incredibly small, a few pounds of matter. Any bigger and your mother would be killed giving birth, or you would take 10x as long to grow up. They are incredibly energy constrained, using only a few watts of power, because any more and you would starve to death. They are incredibly slow and energy inefficient; communication in the brain is done with chemical signals that are orders of magnitude slower than electricity and use much more energy. And they are very uncompact: neurons are enormous and filled with tons of junk that isn't used for computation. Compare that to our transistor technology, which is approaching the limits of physics and built at an atom-by-atom scale.

That's just the hardware specs of the human computer. The software is hardly better. There are just more unknowns because we haven't finished reverse engineering it (but we are getting there, slowly.)

But beyond that, the human brain evolved to be good at surviving on the Savanahs of Africa. We didn't evolve to be good at mathematics, or science, or engineering. It's really remarkable that our brains are capable of such things at all! We have terrible weaknesses in these areas. For instance, a very limited working memory. We don't realize how bad we are, simply because we have nothing else to compare ourselves to.

Consider how even today, relatively primitive AIs are vastly superior to humans at games like chess. Human brains also didn't evolve to be good at chess after all. Even simple algorithms designed specifically for this game easily mop up humans. And play at a level of strategy far above what even the best human players can comprehend.

Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.

bko 10 hours ago 9 replies      
Overall I am sympathetic to the authors argument that fear of super ai is overblown. But I do take issue with some of his arguments.

> Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new.

I'm not a historian but I have read that some scientific discoveries are made through pure logic. Einstein and relativity come to mind as he was not an empiricist. So perhaps there is some hope that ai can lead to scientific discoveries without experimentation

>So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way above us, as we are above an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why cant we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?

The idea that humans could, just by chance, be pushing the limits of intelligence strikes me as silly.

nohat 9 hours ago 1 reply      
Better quality than most such posts, but still seems to be missing the point. The remarkable thing about Bostrom's book is how well it anticipated the objections and responded to them, yet no one seems to bother refuting his analysis, they just repeat the same objections. I actually agree with a decent bit of what he says on these points, though his application of these observations is kinda baffling. He makes a lot of misguided claims and implications about what proponents believe. I'll sloppily summarize some objections to his points.

1. This doesn't really bother making an argument against superhuman intelligence. Yes, of course intelligence has many components (depending on how you measure it), but that's not an argument against superhuman intelligence. I'm reminded of the joke paper claiming machines can never surpass human largeness, because what does largeness even mean? Why it could mean height or weight, a combination of features, or even something more abstract, so how can you possibly say a machine is larger than a human?

2. Mainly arguing about the definition of 'general' without even trying to consider what the actual usage by Bostrom et al is (this was in the introduction or first chapter if I recall correctly). I agree that the different modes of thought that AI will likely make possible will probably be very useful and powerful, but that's an argument for superhuman ai.

3. Well he makes his first real claim, and it's a strong one: "the only way to get a very human-like thought process is to run the computation on very human-like wet tissue." He doesn't really explore this, or address the interesting technical questions about limits of computational strata, algorithm efficiency, human biological limitation, etc.

4. Few if any think intelligence is likely to be unbounded. Why are these arguments always 'x not infinite, therefore x already at the maximum?' He also seems to be creating counter examples to himself here.

5. Lots of strong, completely unbacked claims about impossibilities here. Some number of these may be true, but I doubt we have already extracted anything near the maximum possible inference about the physical world from the available data, which is basically what his claims boil down to.

mcguire 5 hours ago 0 replies      
Not a particularly well written article, but he has a few good ideas. Here are a couple of important paragraphs:

"I asked a lot of AI experts for evidence that intelligence performance is on an exponential gain, but all agreed we don't have metrics for intelligence, and besides, it wasn't working that way. When I asked Ray Kurzweil, the exponential wizard himself, where the evidence for exponential AI was, he wrote to me that AI does not increase explosively but rather by levels. He said: 'It takes an exponential improvement both in computation and algorithmic complexity to add each additional level to the hierarchy. So we can expect to add levels linearly because it requires exponentially more complexity to add each additional layer, and we are indeed making exponential progress in our ability to do this. We are not that many levels away from being comparable to what the neocortex can do, so my 2029 date continues to look comfortable to me.'

"What Ray seems to be saying is that it is not that the power of artificial intelligence is exploding exponentially, but that the effort to produce it is exploding exponentially, while the output is merely raising a level at a time. This is almost the opposite of the assumption that intelligence is exploding. This could change at some time in the future, but artificial intelligence is clearly not increasing exponentially now."

The last bit about requiring experiments in real time is also interesting.

RcouF1uZ4gsC 7 hours ago 2 replies      
One of the big issues with people who talk about controlling superhuman intelligence is that any talk of controlling it is fantasy. We cannot control actual human intelligence for good. What makes us think we could control superhuman intelligence?
faragon 9 hours ago 0 replies      
The article is wrong, in my opinion.

Regarding point #1, while not being formally wrong, world computing capability is growing at an exponential rate. Not even the end of Moore's law will stop that, e.g. 3D transistor stacking, strong semiconductor demand from the consumer and industrial markets, etc. Also, the author doesn't know whether there is already CPU capacity for matching human intelligence: maybe the key missing piece is not the hardware but the software (efficient algorithms for "human" intelligence running on silicon).

Point #2 is clearly wrong. Demonstration: I, for one, if still alive and having the chance, will try to implement general-purpose intelligence, "like our own". And, come on, I know no hacker able to resist that.

Again, point #3 is wrong, unless you believe we're smart because of a religious "soul".

Point #4 is a void argument: the Universe itself is finite.

Point #5 is right: a superintelligence may, or may not, care at all about our problems. But in the same way, you have no guarantee of a human government caring about you (e.g. a totalitarian regime).

danm07 2 hours ago 0 replies      
I didn't read the whole article... Of what I did read, I didn't find it convincing. A few things:

AI doesn't need to exceed humans in every dimension to become a threat. Just sufficient dimensions.

Humanity is basically a bacteria colony in a petridish with I/O. Disrupt infrastructure, and you disrupt input leading to changes in the size of the colony. And mind you, much of our infrastructure resides in the cloud.

Of course, it will be a while before this even becomes an issue, but this is basically how a machine would frame the problem.

Implementation-wise, AI doesn't need to be general. At its most inelegant (and not too distant) design, ML can be configured as a fractal of specific algorithms, with one on top tasked with designating goals and tasks, and subordinates spawning off generations and evaluating performance.

Andy Grove had a good saying: "anything that can be done will be done".

Autonomous AI, if it does not break the laws of physics, will exist. Its development will be spurred by our curiosity or profit.

bradfordarner 9 hours ago 0 replies      
Interesting article from an opinion point of view but I find very little real substance behind his arguments.

He is fighting the original myth with his own myth, except that his myth is founded upon his own assumptions and intuitions as opposed to those of someone else.

It seems more likely that we simply don't know the answer to many of these questions yet, because we still have major disagreements about exactly what intelligence is. To paraphrase Richard Feynman: if we can't yet build it, then we don't understand it.

qsymmachus 9 hours ago 1 reply      
Maciej Ceglowski's takedown of superintelligence is a much better articulation of these arguments, and more (and it's funny): http://idlewords.com/talks/superintelligence.htm
FrozenVoid 1 hour ago 0 replies      
Planes don't have to abide by the laws of bird flight. There will be some breakthrough beyond bird-like mimicry of neural networks: algorithms that perform directly what NNs (our mechanical birds) need days to calculate. Watch for research on how the black boxes of NNs are reverse-engineered and mapped. "Superbird" AI is just discovering that more general laws (flight) underlie bird emulation (bird flight) and applying them to extract the direct algorithms that birds (NNs) produce internally (as instinct).
cttet 1 hour ago 0 replies      
The word "intelligence" comes from natural language, so it is natural for different people to have different interpretations of it.

And this article basically gives it its own redefinition and builds an interpretation upon that.

DiThi 10 hours ago 2 replies      
This person doesn't understand the concept of super AI. Of course intelligence is not one-dimensional. But the current limit in pretty much all of those dimensions is physical: it's the largest number of neurons and connections we can fit in the smallest space that can pass through the pelvis while still feeding enough energy to the brain.

You can imagine this as a bunch of people who speak with each other. The faster they can communicate ideas to each other, the more potentially intelligent the group can be. Machines can surpass the speed of this collective intelligence by orders of magnitude, even if everything else is exactly as in a human. This is exactly why we evolved to devote so many brain resources to language.

sebringj 6 hours ago 0 replies      
I was at the park the other day with my sons when I noticed some other kids on the swings: 2 kids turned and locked legs, then a 3rd sat on their joined legs like a huge human-made swing. The point is, I never thought of doing that with my friends when I was a kid. An AI will be able to think of things we never tried, because there are so many more things that we haven't. Speculating on the short end of this seems laughable to me, like someone from the 1800s talking about balloon travel in the 2000s, basing our limited understanding of possibility on our current limitations.
SubiculumCode 6 hours ago 1 reply      
I don't think I worry about general AI so much as about having specialized AIs in almost every area, including collections of algos that recognize the task at hand and select which specialized AI to engage, as well as other collections of specialized algos that select which selector algos to use based on longer-term goals, etc. That is what makes me afraid.
IAmGraydon 5 hours ago 1 reply      
Perhaps my self preservation instinct is completely broken, but why are people so afraid of the possibility that the human race could be replaced by hyperintelligent machines? We aren't perfect (quite the opposite), and a brain that works in the way that ours does has severe built-in limitations. Perhaps the greatest achievement the human race could ever obtain is to create something greater than ourselves. Something that can carry on learning and understanding the universe around us in ways that no human mind ever could.
js8 9 hours ago 7 replies      
I am not a believer in superintelligence, but for a different reason than the author. I assume the following about superintelligence:

- It somehow needs to be distributed, that is, composed of smaller computing parts, because there is a physical limit to what you can do in a unit of space.

- It needs to change to adapt to its environment (learn), and so all the parts need to be able to change.

From this it follows that the parts will be subject to evolution, even if they don't reproduce. And so the existence of the parts will depend on their survival. This, in my opinion, inevitably leads to the evolution of parts that are "interested" in their own survival, at the expense of the "superintelligent" whole. And that leads to conflict, which can eventually eat up all the improvements in intelligence.

Look at humans. Humanity (or biosphere in general) didn't become a superintelligent whole, capable of following some single unified goal. Instead, we became fighting factions of different units, and most of the actual intelligence is spent on arms races.

Anyhow, even if superintelligence is possible, I believe the problem of friendly AGI has a simple solution. We simply need to make sure that the AGI doesn't optimize anything, but instead takes the saying "all things in moderation" to heart. That means every once in a while the AGI should stop whatever goals it pursues and reflect on the purpose of those goals, asking whether it is not, by some measure, going too far.

You can argue that we don't actually know how to make an AI stop and think. I would respond that an AI that cannot do that, and only pursues some pre-programmed optimum mindlessly, is not really general.

GolfJimB 1 hour ago 0 replies      
Not sure about your list of "some of the smartest people alive today"; it makes me think the article was written by someone definitely not nearly on such a list.
psyc 9 hours ago 0 replies      
Ugh, not another AI article by a Wired editor. I skimmed it and saw only strawmen and non-sequiturs.

These issues are mind-bending topics that stretch the imaginations of the most brilliant people I am aware of. It takes them a lifetime to build good intuitions and analogies. I wish that writers of this caliber felt as qualified to write one sentence about it as they actually are.

hyperion2010 8 hours ago 1 reply      
At a certain point it doesn't matter how much smarter you are; the limit on progress is the ability to take action and to make measurements, enough measurements that you can discern whether a subset of those measurements is biased and in what way. As a result, I tend to think that in order to get true superhuman-level intelligences, they will need superhuman levels of agency, and that is something much harder to build, and to get us meatbags to support, than a really powerful brain in a jar. Building systems with superhuman agency also isn't something that happens just by accident.
AndrewKemendo 7 hours ago 0 replies      
Oh boy. Much respect for Kevin Kelly, but I am afraid he missed the mark with his analysis.

Unfortunately he gets hung up on the definition of Intelligence - and not unreasonably so - because it is very ill defined and largely unknown. So all of what he says is true, but orthogonal to the argument he is trying to debunk.

It's basically setting up a pedantic straw man and then taking it apart.

There are other great and more compelling arguments against an all powerful superhuman AGI, unfortunately he doesn't make any of those.

dboreham 5 hours ago 0 replies      
Nice to see some push back but all the chatter in the MSM these days about self-driving cars and AI that's going to replace everyone's job (except, I note : lawyers..) makes me strongly suspect that somewhere there is someone with an agenda, driving this chatter. Someone's bonus depends on the received knowledge being that "AI is a commin' ta getcha..", which it decidedly isn't, imho. Yes it works for facial recognition (sometimes) and deciding whether to reject spam (sometimes), but not for large swathes of the blue and white-collar job world. Long long way off.

Note: obviously there's nothing special about the meat between a human's ears, so _one_ day someone in theory should be able to build a machine that matches and exceeds a human's thinking ability. But that's not going to happen in any of our lifetimes.

mengibar10 5 hours ago 0 replies      
Tools mankind has invented so far have been extremely productive as well as destructive. I think the worry should not be whether one day superhuman AI will take over mankind, but whether we will be able to stop or limit the destruction wrought by that advanced tool in the wrong hands. The definition of who's right or wrong is, unfortunately, contested. We are a species that justifies its actions.

The leverage and exploitation of advanced AI in the hands of malicious people/corporations/states is on a much closer timeline than "superhuman AI" could be.

So initiatives like OpenAI are very important to balance things out. Somehow I am not optimistic.

hcs 2 hours ago 0 replies      
Irrelevant anecdote: I first saw that radial evolution chart while wandering the UT Austin campus. I think it was in a lobby, though I remember it being dominated by bacteria. Interesting to think that might have been Hillis's lab.
macsj200 10 hours ago 2 replies      
"Humans do not have general purpose minds, and neither will AIs."

The author must not have met many humans.

blazespin 24 minutes ago 0 replies      
AI is undergoing exponential growth. As it gets better it becomes more profitable and feeds investments into itself. It may not make everyone redundant, but it will make most.
ooku 6 hours ago 0 replies      
The author should read this: https://arxiv.org/abs/1703.10987
hasenj 56 minutes ago 0 replies      
It seems like the only thing this author can offer is playing on words to make things seem obscure, blurry and unclear.

His central argument seems to be that intelligence is not a thing, and although he doesn't say it directly, I think he doesn't believe in IQ.

He's committing the same kind of fallacy committed by certain radical ideologues, which basically says something along the lines of: since you cannot define something with 100% accuracy, any statement about that thing is equally invalid.

We don't have to engage in this kind of meaningless argument about semantics.

There are clear and easy to understand examples of scenarios where super AIs can cause harm to human societies that speakers like Sam Harris have articulated pretty well.

akyu 8 hours ago 0 replies      
I completely agree with the author. Hiding my head in the sand and plugging my ears will completely avoid the AI apocalypse.
ptr_void 9 hours ago 0 replies      
The article is a good first step; the second step would be to pick up an introduction to philosophy of mind and realize the enormous number of issues one has to resolve, and methods that need discovering, before getting close to answering such questions as whether AGI is possible.
stanfordkid 9 hours ago 0 replies      
I agree with his proposition that there is no linear "better or worse".

That being said, there is no evidence that an AI that is fundamentally different from (and potentially inferior to) humans could not be much more effective at controlling human behaviors, thoughts, viewpoints or actions.

Furthermore, it may be the case that an AI can sense or understand information we cannot, simply because we do not have the "sensors" to pick up such information. The actual "intelligence" does not need to be very high if the data is that much richer.

From another perspective: the AI may not be as intelligent but may have more control over the environment than humans (e.g. controlling the smart grid, traffic routing, etc.). Because of this, its ability to influence human behavior is larger.

Either of these two cases could be deemed "greater intelligence" ... just intelligence of a different kind. We need to look at intelligence less in terms of human constructs and more in terms of "ability to manipulate human behavior" -- this would be a human-centric definition.

wwarner 5 hours ago 0 replies      
refreshing. i think the cost factor isn't brought up enough. none of this is to say that ai isn't going to change the shit out of everything, it's just that the superhuman, "summoning the demon" rhetoric is imprecise, premature and distracting.
macawfish 6 hours ago 0 replies      
It may be a myth, but that doesn't mean people won't manifest powerful images of it.
goatlover 10 hours ago 0 replies      
We already have the equivalent of Superhuman AI in the form of corporations, governments, and society in general. I don't buy the claim that sometime in the future a singular artificial mind will come into existence whose continual improvement will make it smarter with access to more resources than that of Google, the US government, or all of human civilization, with its billions of organic human intelligences being empowered by machines already.

We've already achieved super intelligence. It's us empowered by our organizations and technology.

partycoder 4 hours ago 1 reply      
While one individual might not have "general purpose intelligence" to the satisfaction of this author (being able to excel at different fields/activities), at the population level it is fair to say human intelligence is general purpose.

Then, there are aspects that are greatly overlooked in all these narratives:

Human geniuses occur very rarely and take literally decades to learn, while the AI equivalent could be consistently "instanced" multiple times, live forever, evolve after birth and work 24/7 without sleep.

Then, humans have crappy I/O. AI is not bounded by the shortcomings of writing/reading/typing/talking at low rates of words per minute...

Generally speaking, AI theoretically has a substantial advantage over humans. Even if AI remains dumber for a time, these advantages are enough to make it prevail.

rojobuffalo 10 hours ago 2 replies      
It's hard to tell where this author is coming from. The three main assumptions you have to make for AGI are (via Sam Harris):

1. Intelligence is information processing.

2. We will continue to improve our intelligent machines.

3. We are not near the peak of intelligence.

The author's first counterpoint is:

> Intelligence is not a single dimension, so smarter than humans is a meaningless concept.

Intelligence is information processing so "smarter than humans" just means better information processing: higher rate, volume, and quality of input and output. Aren't some humans smarter than others? And isn't that a power that can be abused or used for good? We don't have to worry about it being like us and smarter; it just has to be smart enough to outsmart any human.

He then talks about generality like it's a structural component that no one has been able to locate. It's a property, and just means transferable learning across domains. We're so young in our understanding of our own intelligence architecture that it's ridiculous to build a claim around there being no chance of implementing generality.

This statement is also incredibly weak:

> There is no other physical dimension in the universe that is infinite, as far as science knows so far...There is finite space and time.

There is evidence that matter might be able to be created out of nothing which would mean space can go on forever. We might only be able to interact with finite space, but that isn't to say all of nature is constrained to finite dimensions.

Even still, he doesn't make sense of why we need infinite domains. You only need to reach a point where a programmer AI is marginally better at programming AIs than any human or team of humans. Then we would no longer be in the pilot's seat.

EGreg 5 hours ago 1 reply      
I think this whole thing misses the point.

The main difference between these machines and biology is that, once an improvement is discovered, it can be downloaded very quickly and cheaply onto all the machines.

Copying is perfect and can be checksummed. Unlike learning in a university, say.

This is also what enables things like deep learning across all the world's medical data for Watson. A doctor somewhere can't know all the news everywhere and discover statistics and patterns on command, while Watson can not only ingest all this info but also upload the results to all the places.

This ability to perfectly replicate a program also makes the "self preservation" aspect and the "identity" aspect of computers different than that of biological organisms. What is identity, after all, if a program can be replicated in many places at once?

parenthephobia 9 hours ago 1 reply      
> The assumptions behind a superhuman intelligence arising soon are:
> ...
> 4. Intelligence can be expanded without limit.

The only assumption required is that intelligence can be expanded just beyond human limits, which I think is a much less controversial claim.

logicallee 6 hours ago 0 replies      
You have heard this talk from faraway lands about a new type of machine that can supposedly do more than a person with a stick and something to put it against as a lever. The Watt steam engine, some people call it. If thousands of years before Our Lord, humans could roll on logs, pull on pulleys, or push on the inclined plane vast blocks of stone culminating in a Pyramid to their pretended gods, the argument goes, what is to keep someone from making a device more powerful than a man?

I am here to tell you that such lunacy rests on seven wrong misconceptions. While I will freely grant that perhaps it is possible to apply a lever, yet it is human power and human power alone that moves that lever. The idea that anything but a human could do work is absurd on its face. Nobody will ever get from one town to another except on foot, or perhaps on a horse. To allow the idea that a machine could do this or any other task is as deranged as suggesting that machines will fly like birds across continents, carrying people, or that one day men will simply climb up and into the atmosphere and go and land and walk upon the moon. It is clear from first principles that raising or moving anything takes work and power: it is just as clear that nobody but man shall ever provide that power, let alone any more.

I do not have time to rewrite the above: substitute a hundred billion neurons doing chemical reactions, and add that it is clear computers can never do either the same or even less so, any more, and you will see how completely wrong the author is in every way.

Nobody but a man can ever do work, and nothing but a hundred billion neurons can or will ever think.

aaroninsf 7 hours ago 0 replies      
A one sentence rebuttal to this is that the exponential take-off of human civilization ~= accelerating distributed intelligence.

You can quibble about what an AI is; if you draw a box around human civilization and observe its leverage and rate of change, well, the evidence is that we are riding the superhuman takeoff.

Analemma_ 10 hours ago 3 replies      
When you want to discuss "the myth of a superhuman AI", it's important to carefully separate the two categories of claims:

1. The claims by economists that AI-- even if it's not "strong AI"-- will put lots of people out of a job with potentially severe societal/economic repercussions

2. The claims by the Bostrom/Yudkowsky/etc. crowd that an AI intelligence explosion will cause the extinction of humanity

Without saying anything about the plausibility or lack thereof of either 1 or 2, I think we can all agree that they are very different claims and need to be analyzed separately. Right from the very first sentence the author seems to muddle the two, so I don't think there's much cogent analysis in here.

bhouston 10 hours ago 0 replies      
Superintelligence is a cargo cult for some (you know who I am talking about), but that doesn't mean it won't happen to some degree.
darawk 5 hours ago 0 replies      
Why do people like Kevin Kelly? Everything i've ever seen him say is consistently misinformed and poorly thought out. I tried to read one of his books recently, because I heard a number of people recommend it, and I couldn't even finish it (highly unusual for me).

Basically every point he makes in this post is just fundamentally wrong in one way or another. He clearly has no understanding whatsoever of what he's talking about, on the technical, biological, or psychological sides. He's just saying things that seem true to him, with zero context or understanding of any of the issues involved.

> Intelligence is not a single dimension, so smarter than humans is a meaningless concept.

Multi-dimensional vectors have magnitudes just like scalars do. When will people get over this whole "intelligence is not one thing, therefore you can't say anything at all about it" nonsense?

> Humans do not have general purpose minds, and neither will AIs.

False absolutism. Human minds are certainly more general purpose than any existing AI. When an AI has a mind that is more general purpose than ours, I think it's fair to call it a general purpose AI.

> Emulation of human thinking in other media will be constrained by cost.

According to who? The only person that could answer that would be someone who already knew how to emulate the human brain. Although, come to think of it, some 50% of the human population are able to create new brains, at quite little cost. So it is empirically possible to synthesize new brains extremely cheaply.

> Dimensions of intelligence are not infinite.

Lol, according to who? What does this even mean?

> Intelligences are only one factor in progress

Sure. So what?

There are plenty of perfectly legitimate, well thought out, informed critiques of AI fear mongering. This, however, is not one of them. This is garbage.

felipemnoa 5 hours ago 0 replies      
I think it is OK to point out obvious errors in your approach when trying to create something new. But in this post all I can read is that you cannot create superhuman AI just because he thinks it is not possible. I don't think I read any real arguments.

All he is doing is trying to convince us all that it is not possible to create AI.

Hopefully nobody is convinced by this post not to try to create a superhuman AI. Most of us will fail but at least one will succeed. I don't think it is any exaggeration to say that this will probably be our last great invention, for good or for bad. Of course, I may just be biased given my own interests in AI.[1]

[1] https://news.ycombinator.com/item?id=14057043

Ffffound is shutting down ffffound.com
173 points by nikolasavic  13 hours ago   84 comments top 23
SwellJoe 10 hours ago 2 replies      
My only exposure to Ffffound was when I met someone at a party in Silicon Valley about ten years ago who was working on a clone of Ffffound ("but different"). I'd never heard of Ffffound, and he kept saying it with all of the "f"s sounded out like he was stuttering. It was hilarious on a couple of counts. For one, he was working on a clone of something that was so small at the time that I'd never heard of it, and I was living in the valley and kinda staying on top of startup news; something with no known business model, no big investment, etc. no evidence that it would go anywhere. And, for the other, it just sounded silly to sound out the name every time he said it.

I looked up the site, probably the next day, and couldn't really figure out what it was for, so never visited again. I'm obviously not the target market, but that's one of the funnier memories I have of Silicon Valley and its culture.

Also, I'm a little surprised it's lasted this long. I didn't expect it to, given my impression of it at the time. Good for them.

accountyaccount 3 hours ago 0 replies      
I've been visiting this weekly for 10 years.

It's easy to take for granted now, but ffffound was before tumblr and really before Facebook was a household name. It predates the current use of the word meme.

It's very much a bastion of the 2000s internet. In terms of long-term personal internet use, ffffound for me is second only to boingboing (which I feel is nearing an end as well).

mcphage 13 hours ago 5 replies      
Weird: this was a site that I used to hit pretty regularly, and then it seems that one day I just forgot it existed? So now, linking to it, I remember having gone there, but I don't remember what it is.
jack_jennings 9 hours ago 1 reply      
Mentioned this as a reply, but perhaps worth posting again: check out https://are.na if you are one of the folks yearning for a similar thing (that isn't pinterest). Arena is certainly a tool for a certain niche, but it has a great API (some people have used it as a CMS using the API) and you can create "channels" of content (images, text, URLs) that can be nested/associated within other "channels".
yan 12 hours ago 1 reply      
Man, talk about the ephemeral internet: One of my first projects[1] in Haskell was a small tool to go through my Google Reader favorites and download posts I tagged on ffffound.

[1] https://github.com/yan/hhhhoard

btym 12 hours ago 4 replies      
What an abrupt end. And their robots.txt[1] never allowed the Internet Archive to crawl them, so nothing will be archived.

[1]: http://ffffound.com/robots.txt

huac 12 hours ago 1 reply      
10 years, and I could never get an invite. RIP.
alkoumpa 8 hours ago 0 replies      
If they are shutting down, why not make (and share) an archive of the whole site, to preserve the work that's been done? Sharing through BitTorrent is free and decentralized. They already host the images, and I get the feeling that sites in this niche are already in some copyright gray area.
franze 13 hours ago 3 replies      
here is the google trends graph https://trends.google.com/trends/explore?date=all&q=FFFFOUND wonder what happened in july 2013?

the decline in 2015 might be the push by google for mobile friendly sites.

_eht 13 hours ago 1 reply      
Well that was vague. I can't help but wonder why. Is it because they have exclusively worked with The Deck for advertising, or did they just get bored? Surely they could find some similarly minimal way to advertise in place of The Deck.
voidz 13 hours ago 1 reply      
Well, I've never heard of this site before, but so long and thanks for all the fish, I guess. Looks like I missed a pretty nice website.
upbeatlinux 12 hours ago 0 replies      
Sad. I remember, at least initially, Google Gears having a tough time trying to parse blog content linked from Ffffound. I'd download all my Reader content for plane rides only to have missing images. Brings back memories of a better time; before Yahoo killed Delicious and Flickr.
kveykva 3 hours ago 0 replies      
Main thing about ffffound I liked was that using h/j/k/l to skim through the site actually worked.
AdrianRossouw 9 hours ago 0 replies      
joemi 8 hours ago 0 replies      
I always had a really big problem with the site: so little attribution (proper or otherwise) of images.
stevefeinstein 9 hours ago 0 replies      
Before 20 seconds ago, I did not know this was a thing.
ic4l 12 hours ago 1 reply      
Image not FFFFOUND
sogen 11 hours ago 1 reply      
This is Colossal [0] is a good alternative.

Text heavy.


teddyknox 5 hours ago 0 replies      
Ello.co stealing their metaphorical lunch
maerF0x0 13 hours ago 3 replies      
Blocked on my work's network? Whats on here?
justinzollars 12 hours ago 0 replies      
sad, that's one of my brother's (who is a designer) favorite sites
drax_ 6 hours ago 0 replies      
jjjjound.com the OG design inspiration site is still up.
bebop22 13 hours ago 0 replies      
Show HN: Beautiful notes app that saves directly to GitHub kobble.io
153 points by kobble  14 hours ago   119 comments top 23
coalaber 13 hours ago 3 replies      
This feels excessively complicated to me. What's a track? What's a channel? Why do I need filenames for everything? Why does it take 3 minutes for a new user to write a single line of "note"?

A beautiful notes app (to me) would be one where you start by writing a note.

abrussak 8 hours ago 2 replies      
I think I'm able to view all other gists.

If I open the Gist on https://gist.github.com/ and then go to the original gist at 'kobble-git/channel-groups.json', I can navigate to all the forks and see data for other users.

matt4077 13 hours ago 7 replies      
> Kobble is the only app available where you own all of your data.

What? I find that highly unlikely. In what way do I not "own" the texts I write in, for example, Apple's Notes.app?

And what is even meant by "own"? Even for something like Facebook, I'm pretty sure that I still own the copyright for texts I write in their app.

lol768 13 hours ago 1 reply      
Interesting tool, I like that it works with GitHub and the repo support will be cool to see.

The design is neat (I generally like dark themes) although the fading animations got a bit annoying after a few minutes using it. Some of the dialogs could do with supporting common shortcuts (e.g. enter to create a new track, escape to close the dialogs). Sometimes I managed to click outside of the presentation view without realising, and then left/right arrow keys stopped working. Sometimes I wasn't sure where to find certain functions (e.g. I looked for "Edit" on a presentation item on the context menu, but then realised I had to click it and then click the pencil icon). I couldn't get the Share functionality to work, and wasn't sure why it was in its own context menu with no other items.

I'm not sure I understand the purpose of the tracks/channels distinction. I can understand creating a project to store presentations (items) in, but why is there another level in Kobble?

insomniacity 13 hours ago 4 replies      
What does this have over Standard Notes?


Namrog84 13 hours ago 1 reply      
30 second impression:

The landing page has side arrows? Is this meant to be a Powerpoint slides deck type app?

Why isn't it just a one-page vertical scroll to showcase the 'notes app'? Or do all the pages in notes have to be clicked through with the page arrows?

edit: I looked around a little more (never logged in), but all I ever saw was the slides. I'm still not sure if that's the only mode, or whether there's another preview showing other modes/styles of notes. As boring as it may be, I'd love to see some generic notes examples: grocery list, todo, programmer's notes, school notes, lorem ipsum, anything more representative of real-world notes and less of just listed features.

peterburkimsher 4 hours ago 0 replies      
I like the idea of saving directly to Github. I'm using Github API for dictionary changes in my Chinese text translator that I'll release soon.

The user interface is, indeed, beautiful. But that's not enough to make me use it.

Notes on my iPhone can't be stored in a hierarchy of subfolders. In order to do that, I made my own notes app in PHP, but it's pretty awful. I plan to port it to JS+Github, but I have other priorities right now.

Is it possible to commit only the changes to a file using git, instead of re-uploading the whole file? I wish there were an easy way to append to a file without a download-append-upload process. Please teach me if you know how to do this in git!

danellis 13 hours ago 2 replies      
Curious as to what makes this "beautiful".
jaquers 9 hours ago 1 reply      
Having read your explanation of channels/tracks, I get it: you have to serialize your storage into something, and it's neat that it's representable through a URL. I was a little confused at first. I agree with others that the organization seems a bit too forceful or opinionated for the primary use case of a "notes app", which is jotting down a quick note.

Maybe you could have an "unsorted" channel by default, and then split the context dropdown into little icons? Simplify the workflow to: 1. Click a "new markdown" button. 2. Write some markdown. 3. Save.

If you defer asking the user to name a file up front, you can auto-generate a filename based on the contents of the document (like parsing the 1st line for "# My Title" -> my-title.md). The only sacrifice is making them click "save" the first time, but IMO that is a lot less friction than having to name documents before I even start writing. Plus, even if your parsing fails, you're no worse off than where you started by asking the user for a filename.
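The heuristic described above is easy to sketch. Assuming a Markdown source, something like the following would work (the function name and slug rules are illustrative, not Kobble's actual behavior):

```python
import re

def filename_from_markdown(text, fallback="untitled.md"):
    """Derive a slug filename from the first ATX heading, e.g. '# My Title'."""
    for line in text.splitlines():
        match = re.match(r"#{1,6}\s+(.+)", line.strip())
        if match:
            # lowercase the heading, collapse non-alphanumeric runs to hyphens
            slug = re.sub(r"[^a-z0-9]+", "-", match.group(1).lower()).strip("-")
            if slug:
                return slug + ".md"
    return fallback  # parsing failed: no worse off than asking the user

print(filename_from_markdown("# My Title\n\nSome body text."))  # -> my-title.md
```

The fallback path matters: as the comment notes, a failed parse just degrades to the behavior you had anyway.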

ronilan 13 hours ago 2 replies      
Shameless plug time again...

* My summer project (2016):


* The pitch goes something like this:

<xstatic>| Docs is a simple, fast and easy to use, web based document editor.

It generates static (but editable) HTML files that can then be easily shared and printed. Like Google Docs inside an AWS S3 bucket.

* More info in this document:


thanatropism 6 hours ago 0 replies      
So I have (and bought!) not one but two plaintext editors for iOS that save directly to Dropbox or iCloud: iWrite Pro and 1Write.

I have both apps because iWrite is my reach-for text box while 1Write opens to a Dropbox folder that has a semi-organized idea box (a "personal wiki" of sorts, but informally so).

Why do I want this instead? Versioning? I've thought of moving my ersatz "personal wiki" to Github for that, but I don't wish to make it visible to the world.

passivepinetree 12 hours ago 0 replies      
> Kobble is the only app available where you own all of your data.

If it's hosted on Github's servers, you don't really "own" it. Or at least you don't have sole proprietorship of your data.

philliphaydon 12 hours ago 2 replies      
I registered localnotes.io or something a couple of years ago to make a notes app (just a side list of note files and then a markdown editor) that would persist to local storage and could be synced to private gists, with images stored as base64-encoded files under that gist.

I got a proof of concept working but was too lazy to integrate with GitHub and canned it.

Working on something more fun now and learning vue.js.

baby 11 hours ago 1 reply      
Unfortunately I cannot access this because it can read my private gist. I understand why it needs the access though :(
bdickason 6 hours ago 0 replies      
I'm into the idea (I currently use iA Writer saving to a folder in Dropbox). Is there a desktop client, or only the web version?
aorth 11 hours ago 2 replies      
Small, non-technical pet peeve: they repeatedly spell GitHub as Github.
philters 11 hours ago 1 reply      
Anyone looking for a nice interface to manage your Gists should take a look at http://www.gistboxapp.com
grogenaut 9 hours ago 0 replies      
Why do you need access to my email address?
lfender6445 13 hours ago 0 replies      
I would love something like this for free Bitbucket private repos. Is there any planned support? Realtime updates would also be great.
orschiro 12 hours ago 1 reply      
Can I use Kobble offline?
Eun 13 hours ago 1 reply      
It would be nice if there were a mobile-friendly version.
webwanderings 13 hours ago 0 replies      
How do I use this? Is there a bookmarklet?
rileytg 3 hours ago 0 replies      
Show HN: Gopher Browser for Windows Client jaruzel.com
75 points by Jaruzel  11 hours ago   28 comments top 10
delbel 1 hour ago 0 replies      
I remember Gopher and another one called WAIS. Then one day, somebody told me on irc I needed to download 'Mosaic', and I remember thinking how superior Gopher was at the time and thought this 'www thing' would never catch on because there was so much information on Gopher and WAIS.
phusion 7 hours ago 0 replies      
Oh man, I don't think I have much/any use for this today, but my first internet experience was on Gopher. They had it set up on what I'm assuming were some kind of *nix machines at the local library. I would search for comic book related material and generally "surf" around when my mother would drive me down there. Fortunately the Internet went public in '97, my jr high school had several networked computers, eventually all on the Internet blah blah netscape.
niftich 9 hours ago 1 reply      
Oh, this is you! [1] Grats on release!

The last big thread on Gopher here was pretty entertaining, nostalgic, and informative [2] -- including posts by the creators -- and it's the one where OP posted that they're working on a new windows client [1].

[1] https://news.ycombinator.com/item?id=12274235

[2] https://news.ycombinator.com/item?id=12269784

unicornporn 2 hours ago 2 replies      
If you use Firefox, you can get Gopher support in an extension: https://addons.mozilla.org/firefox/addon/overbiteff/

Works great.

pimlottc 9 hours ago 1 reply      
"Gopher Browser for Windows Client" is rather confusing; why not just "Gopher Browser for Windows"?
KirinDave 9 hours ago 1 reply      
I'm curious: what's still in gopherspace these days? Worthwhile things?
fsiefken 8 hours ago 1 reply      
This is great, and it's so fast. Some feature requests:

* map backspace and arrow-left to previous page

* map enter/return to select

* open text files, images and sounds inline

mileycyrusXOXO 9 hours ago 0 replies      
I love exploring gopherspace from time to time. Usually I use Lynx but I may have to give this a try.
topbanana 9 hours ago 1 reply      
Chrome Warning: This file is not commonly downloaded and may be dangerous.

Nice try Google

aninteger 6 hours ago 3 replies      
Is there source code I can audit and compile instead of running some random exe file?
The FCC plan to undo its net neutrality rules washingtonpost.com
236 points by sinak  11 hours ago   147 comments top 20
outsidetheparty 11 hours ago 6 replies      
> Pai argued that rolling back the rules will encourage ISPs to spend more on their broadband networks

How? How on earth does letting ISPs milk more money out of their existing network incentivize those ISPs to expand that network? It's just so frustrating that these guys no longer even bother to pretend that their arguments make any logical sense at all. "We'll protect user privacy by eliminating these rules that protect user privacy!" "We'll boost competition and choice, by giving the existing entrenched players more control and power!"

This is just such a naked demonstration of regulatory capture. It boggles my mind.

ficho 9 hours ago 3 replies      
"Two years ago, I warned that we were making a serious mistake," Pai said. "It's basic economics: The more heavily you regulate something, the less of it you're likely to get."

Just reading this makes me angry. Has he ever heard of monopolies/oligopolies, or of the government's role in regulating public utilities?

Apply his quote to water supplies and see what happens.

tzs 10 hours ago 1 reply      
> Pai's proposal is set for a vote at the FCC's May 18 open meeting. If it is approved, Pai will begin seeking public feedback on the plan, which calls for regulating ISPs more lightly and asks Americans for ways to preserve the core principles of net neutrality, such as the idea that blocking or slowing traffic should be off-limits.

Well, the obvious way to "preserve the core principles of net neutrality, such as the idea that blocking or slowing traffic should be off-limits" is to not get rid of the regulation that prohibits blocking or slowing traffic.

sinak 11 hours ago 2 replies      
Here is the "fact sheet" that Ajit Pai distributed at the event at the Newseum today: https://twitter.com/davidshepardson/status/85728851922775655...

Also of note, YC, TechStars and Engine Advocacy released their "Startups for Net Neutrality" letter to Pai this morning: http://www.engine.is/startups-for-net-neutrality

joezydeco 8 hours ago 0 replies      
Wouldn't it be fun if Netflix, Hulu, HBO, and a few other streamers all decided to throttle the Washington DC area down to, say, 10% of normal outbound bandwidth for a long weekend?

Maybe it wouldn't touch Pai, but if it pissed off a huge block of his neighbors maybe he'd start to understand the idea.

js2 11 hours ago 1 reply      
I just set up monthly donations to the EFF (I'm ashamed I didn't do so a long time ago), and you should too. Also don't forget to check if your employer will match your donations.


hplust 11 hours ago 1 reply      
Here is the EFF link [0] that provides additional information and also provides methods of contacting your local government officials to take action. Please send this along!

[0] https://www.eff.org/deeplinks/2017/04/fcc-wants-eliminate-ne...

sxates 9 hours ago 1 reply      
I'm paying 30% more each month to use a local ISP that's dedicated to Net Neutrality and online privacy (sonic.net) instead of Comcast/xfinity, because Comcast, along with the other large telecom companies, are actively undermining competitive markets, consumer freedom, and in a sense, our democracy.

An extra $20-30/month is a small price to pay to keep what little competition they have alive. I'd encourage everyone else to examine their choices here if they have any (which I realize for many is 'bad' or 'bad').

doctorshady 7 hours ago 2 replies      
So, I'm just going to ask outright; what would it take to get a politician like Pai investigated? I'd find it very hard to believe at this point that he doesn't have his hands stuck in exactly the sort of pies (no pun intended) that could get him in trouble.
cwisecarver 7 hours ago 0 replies      
I live in a rural area in the mid-atlantic US. My choices for internet are 10mbit DSL from CenturyLink or LTE (with bandwidth caps). I've only lived here for a year but my previous two homes have had at least two options for 100mbit+ internet, FiOS and Comcast, and LTE.

Comcast serves across the street from my house and told me it would be $76k out of my pocket to get service here. For < 1mi run of fiber.

I'm 100% in favor of making ISPs common carriers and making it unlawful for them to track their customers' habits. I've worked for two ISPs in my career. They should be happy with the business of providing internet to customers and compete on delivering better, faster, and more reliable service not on selling their customers' information or eyeballs to the highest bidder.

cakeface 10 hours ago 7 replies      
The sneakiest thing just happened to me on that Washington Post site. There was an overlay ad, no big deal, with an X to close it in the upper right of the screen. I moved my mouse up there to click the X and the site shifted slightly so what I actually clicked was "Subscribe"!
ShameSpear 11 hours ago 3 replies      
I just feel hopeless. I don't think there's anything we can do at this point
msoad 9 hours ago 0 replies      
Like many other republican policies it's going to hurt the poor more than others. I would probably be fine paying extra to access "all of internet" but for a low income person it's going to be devastating. They will lose access to many useful resources due to this policy.
digitalmaster 9 hours ago 0 replies      
The power and influence of monopolies are a direct threat to democracy. This is just another example.
notadoc 11 hours ago 5 replies      
How, specifically, is this helpful to the consumer or to the internet in general?
pvnick 6 hours ago 1 reply      
This is what happens when bureaucrats in the administrative state determine rules rather than congress.
rdxm 9 hours ago 1 reply      
Would be really interesting to do a deep dive on Pai's personal finances. He's not even trying to hide how much he's in the bag for the telecom/ISPs.
emehrkay 6 hours ago 0 replies      
Hacker News'ers, why did you vote for republicans knowing they'd do something like this?
JustSomeNobody 11 hours ago 0 replies      
Woot! This will CREATE JOBS!

Pai is disgusting.

kakarot 8 hours ago 0 replies      
We're all getting very tired. If this doesn't pass today, it will pass tomorrow. The slow entrenchment of government over our freedoms and rights as international sovereign beings seems relentless. If we cannot push for high-order legislation outlining very clear rules for our digital rights soon, we won't have any rights left to claim. And then we can only take them by force. I would prefer to avoid that.
MG4J A free full-text search engine for large document collections unimi.it
84 points by luu  14 hours ago   13 comments top 5
verytrivial 14 hours ago 1 reply      
That name sounds very familiar, as does the feature set. Managing Gigabytes[1], or "mg", was the output of University of Melbourne and RMIT research in the 1990s. It went on to be commercialized as SIM and later TeraText[2] and has largely disappeared into the government intelligence indexing and consulting-heavy systems space (where it is now presumably being trounced by Palantir).

[1] https://www.amazon.com/Managing-Gigabytes-Compressing-Indexi... - Note review from Peter Norvig!

[2] http://www.teratext.com/

dumbfounder 13 hours ago 0 replies      
Blast from the past! Distributed is a bit of a stretch; I think you need to coordinate all of that yourself. It is no more distributed than Lucene (I think).

Their fastutil stuff is pretty interesting, though, for creating highly optimized algorithms. Lots of primitive-based data structures that are fast and memory efficient.

styfle 14 hours ago 2 replies      
How does this compare to Elasticsearch or Solr?
woliveirajr 12 hours ago 0 replies      
Some links are broken inside the unimi.it site.
bawllz 9 hours ago 2 replies      
how is this on the first page of hackernews?
Backdoor in the firmware of Antminer Bitcoin mining hardware antbleed.com
198 points by twowo  9 hours ago   91 comments top 16
tedivm 8 hours ago 2 replies      
The hardware checks in with central Bitmain servers to see if the hardware is "legitimate". The Bitmain servers have to explicitly return false to disable the machines, which is really important because it means that the servers just being disabled (via a DDoS, for example) will not shut down anyone's systems.

That's why redirecting the traffic from "auth.minerlink.com" to point at localhost is an effective way to bypass the issue. The server (localhost) isn't responding with false, and thus the system stays up and running.

The idea that all machines would be shut down globally seems a bit excessive. While possible, it would require Bitmain to lose control of their domain globally, and I imagine an issue like that would get resolved fairly quickly.

That being said, it is a bit stupid of Bitmain to be doing this, especially if they aren't even doing it over SSL.
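A minimal sketch of that default-allow logic (the real firmware is C; the names here are hypothetical):

```python
def allowed_to_mine(fetch_status):
    """Only an explicit False from the check-in server disables the miner.

    Timeouts, DNS failures, or a redirect to a dead localhost server all
    raise before returning anything, so the miner keeps running.
    """
    try:
        status = fetch_status()  # e.g. an HTTP GET to the auth server
    except Exception:
        return True              # server unreachable -> keep mining
    return status is not False   # anything but an explicit False passes

def unreachable():
    # Simulates the /etc/hosts redirect: nothing is listening on localhost.
    raise ConnectionError("connection refused")
```

This is why DDoSing the server or pointing it at a dead host fails safe, while an attacker who can answer the unauthenticated request with an explicit false can shut the miner down.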

haakon 9 hours ago 2 replies      
Someone noticed it in September, opened an issue, and got no response. https://github.com/bitmaintech/bmminer/issues/7

It also looks like the backdoor may have a remote code execution vulnerability: https://twitter.com/petertoddbtc/status/857340167400587264

tyingq 8 hours ago 1 reply      
The Reddit thread on this seems somewhat evenly divided on whether this is a real issue: https://www.reddit.com/r/btc/comments/67qzsn/antbleed_exposi...

Surprising, as it seems like a straightforward, real issue.

derimagia 1 hour ago 0 replies      
As usual, please make sure you link to GitHub files via permalinks.


If the file is edited, that link is useless; even worse if it's removed or moved.

Press "y" and you get: https://github.com/bitmaintech/bmminer/blob/b5de92908498590d...

runeks 2 hours ago 1 reply      
Can someone explain why Bitmain controls such a large share of the global hashing power? Why can't competitors produce Bitcoin mining chips as good as Bitmain's? Is the case simply that Bitmain were first with 16nm-based miners, and when everyone else get there too, Bitmain will have no advantage left?
twowowo 9 hours ago 1 reply      
Bitmain is not only the producer of those mining ASICs, it also controls a huge share of the mining power itself.

If it really can kill a large fraction of the remaining hash power, it is quite likely they would control over 50% themselves.

That is scary! Especially as they are known to act maliciously in other situations and are opposing the remaining part of the community in the Segwit vs Bitcoin Unlimited debate.

webninja 9 hours ago 0 replies      
It appears that Antbleed is a proposed temporary denial of service attack.

Even without Bitmain being malicious, the API is unauthenticated and would allow any MITM, DNS or domain hijack to shut down Antminers globally. Additionally, the domain in question has its DNS hosted by Cloudflare, making it trivially subject to government orders and state control.

olegkikin 9 hours ago 7 replies      
Why would Bitmain shut down their customers' hardware? It's a sure way to kill their (quite successful) brand.

But even if it happens, all you have to do is update the firmware again, and keep mining.

But let's imagine someone actually does shut down 70% of the mining power. What's the real consequence? Blocks will be mined proportionally slower until the difficulty self-corrects, which in the very worst case takes 2016 blocks.
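Back-of-envelope sketch of that retarget arithmetic, assuming Bitcoin's 10-minute block target and 2016-block retarget window:

```python
TARGET_BLOCK_MINUTES = 10
RETARGET_BLOCKS = 2016

def expected_block_minutes(remaining_hashpower):
    # Block discovery rate is proportional to hashpower, so until the next
    # difficulty retarget the average interval stretches inversely.
    return TARGET_BLOCK_MINUTES / remaining_hashpower

def days_to_retarget(remaining_hashpower):
    # How long the 2016-block retarget window takes at the stretched interval.
    return RETARGET_BLOCKS * expected_block_minutes(remaining_hashpower) / (60 * 24)
```

At full hashpower the window is the familiar 14 days; with only 30% remaining, intervals stretch to roughly 33 minutes and the window to roughly 47 days.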

plart 8 hours ago 2 replies      
Redirecting auth.minerlink.com to point at localhost is not a permanent way to bypass the issue.

If the next patch includes a localhost server that can proxy communication to auth.minerlink.com, the issue returns.

If the next patch updates the url to auth2.minerlink.com (or any other domain), the issue returns.

If the next patch flips from default-allow to default-deny, all customers will be out of luck until they patch and come back to the central control mechanism.

If the next patch implements some form of authentication such that you can't easily spoof a "True", all customers will be out of luck.

If the remote code execution is used to patch code without the administrator's knowledge/permission (haakon has a link saying remote code execution is possible) any/all of the above are trivially easy to take advantage of.

Lastly - my understanding of bitcoin is a little fuzzy, but I believe with more than 50% of the computing power, you can rewrite transactions as you please, capturing as many bitcoin as you want. If you had a plan to not centralize those accounts, it would be extremely difficult to sort out.

fpgaminer 8 hours ago 4 replies      
I wonder if YC would fund a new Bitcoin mining ASIC startup. I often regret not being able to pivot my previous company from FPGAs into ASICs :/
ReligiousFlames 7 hours ago 0 replies      
"Sell them coin-operated shovels that we can remotely disable". Until they get caught. Oops.
tyingq 9 hours ago 1 reply      
Looks like they also took down auth.minerlink.com ... at least it doesn't resolve to anything for me.
twexler 9 hours ago 3 replies      
I'm not sure what's worse about this:

1. The fact that it exists

2. The fact that they're using "something" bleed as the name (creativity, please)

3. That whoever created this page recommends the user alter the miner to point to some other, user-controlled HTTP server, effectively MITMing anyone who sees this page.


lossolo 9 hours ago 1 reply      
You would need to be a MiTM to exploit that. This "backdoor" will probably have almost no effect; it's interesting that someone made a special site just for it.
DonbunEf7 9 hours ago 1 reply      
As usual, this is a strong lesson for those who haven't considered capability-safe designs. A big pile of C carrying many libc calls is pretty hard to audit!
gnu8 8 hours ago 1 reply      

 Standard inbound firewall rules will not protect against this because the Antminer makes outbound connections.
What kind of idiot doesn't have outbound firewall rules, particularly on their production mining network?

Animated Bézier Curves (2010) jasondavies.com
81 points by arm  13 hours ago   8 comments top 3
onuralp 11 hours ago 0 replies      
A Primer on Bézier Curves (with interactive animation) - https://news.ycombinator.com/item?id=14191577
csense 12 hours ago 4 replies      
Bezier curves are usually defined as polynomials. It would be interesting to see some algebraic derivation showing how the polynomial form of the curve follows from the construction in the visualization.
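One way to see it: each level of the construction is a linear interpolation in t, so n levels compose into a degree-n polynomial; expanding the quadratic case gives B(t) = (1-t)^2 P0 + 2t(1-t) P1 + t^2 P2. A quick numerical check of that equivalence (helper names made up):

```python
def de_casteljau(points, t):
    """Repeated linear interpolation, i.e. the construction in the animation."""
    pts = list(points)
    while len(pts) > 1:
        # Each pass lerps adjacent points, reducing the list by one.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def quadratic_bernstein(p0, p1, p2, t):
    """Closed-form polynomial: B(t) = (1-t)^2 P0 + 2t(1-t) P1 + t^2 P2."""
    c0, c1, c2 = (1 - t) ** 2, 2 * t * (1 - t), t ** 2
    return tuple(c0 * a + c1 * b + c2 * c for a, b, c in zip(p0, p1, p2))
```

The two agree for every t, which is the algebraic content of the visualization: collecting the nested (1-t) and t factors from the lerp levels yields the Bernstein coefficients.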
btkramer9 12 hours ago 1 reply      
I feel like there are some really neat insights and visuals that can be made combining this with the Fourier transform, but I can't quite pin it down.
Places to Post Your Startup or Product breue.com
142 points by zvanness  11 hours ago   19 comments top 7
shubhamjain 13 minutes ago 0 replies      
ProTip: Since the users of these forums are predominantly internet-savvy, you're likely to hit more people who are testing the waters than people genuinely looking to solve the problem you address. Just don't forget to add a "pay me" button. It may sound risky, but it's worth it: there is nothing worse than building something that no one is willing to pay for. Trimming down early makes sure that you're building a solution that genuinely solves a problem.

At the nascent stage, you don't need big numbers, only people who are delighted to use your product and are willing to pay for it.

minimaxir 10 hours ago 2 replies      
Since this is on front page, here's a little rant about spamming sites with your product:


Yes, discoverability is broken. But that doesn't justify spamming, or other "growth hacking" that the Product Hunt culture has made socially acceptable (in fact, there has been a rise in clever voting manipulation tactics on Hacker News because such tactics have been normalized on PH: https://twitter.com/jiyinyiyong/status/855661997169364993).

Spamming won't magically make you go viral. And if you don't go viral, that's ok. Improve and try again.

welanes 8 hours ago 1 reply      
Quora is probably one of the better resources on that list because, rather than listing your product on the shelf next to 1000 others, it allows you to discover people looking for solutions just like yours.

For example, say you're building an Android weather widget (for some reason). Build a profile where you're an 'Android', 'Widget', 'Weather' expert and Quora will surface this question for you to answer* : https://www.quora.com/Android-Application-Which-is-the-best-...

This targeted approach is much more valuable for your product, your time, and for the person asking the question.

*Of course it helps to actually have the best weather widget.

toni 10 hours ago 1 reply      
Who would benefit from posting their product to 125 websites except spammers and growth hackers?
metalmanac 5 hours ago 1 reply      
How many of those will actually bring in meaningful traffic? It would be nice to see a list from people who have successfully launched a product on multiple platforms and have traffic numbers to back up their suggestions.
uladzislau 7 hours ago 0 replies      
I'd prefer 25 good, reputable, vetted places. I guarantee some of these sites are not even loading.
kensai 10 hours ago 1 reply      
This begs for an application to automatically post on all (or at least most) of them! :D
How to Avoid Going to Jail Under 18 U.S.C. For Lying to Government Agents findlaw.com
239 points by gist  12 hours ago   200 comments top 27
tptacek 11 hours ago 7 replies      
Without getting into the scope of the 1001 statute (I think they're probably mostly right about it), I just want to chime in with a nit about the example they chose:

If you take a job at a health provider that you later learn is corrupt, and later knowingly transport false vouchers as part of your job while otherwise avoiding direct participation with the more overtly criminal parts of the enterprise, you are a criminal. What you're doing is wrong. Put into that situation unexpectedly, you must either quit, or immediately report your employer (and, presumably, then quit). You can't knowingly accept a paycheck from a criminal enterprise if doing so requires you to help carry out criminal actions. The law says that's criminal, but even if you don't care about that, so does the social contract.

Are you less culpable than the owners? Sure, of course you are. But this article chose I think a really terrible example, one that creates a false sense of what it means for someone to be incidentally and unjustly swept up in a crime they themselves tried to avoid.

We in this industry all need to be taking more responsibility for our individual actions and the net impact they have on society.

mnm1 11 hours ago 2 replies      
I don't see it prudent to submit to an interview even when the agents assure you that the interview is only administrative or that they are not investigating you. The agent can lie without penalty. Unless this is something official, in writing, from the AUSA's office that was verified by one's lawyer, it's likely to be a trick. It's a standard tool in any law enforcement officer's arsenal. I always see scenes on TV where in an emergency people talk to the FBI telling them where the perp went or some other details and I think: if this were real life, I would expect the FBI to be turned away every time, even when lives are at risk and people will die because the FBI cannot get the information they need. It's too bad that this is the kind of society we chose to create, one where trying to help law enforcement is simply too risky to oneself.
colanderman 9 hours ago 0 replies      
> Whether you speak, what you say and how and when you say it can have a profound effect on your future when you find yourself involved in a white-collar criminal investigation.

It is very disturbing that we live in a society with laws so obscure to the common person. Unless you know this "one weird trick to avoid indictment under Title 18, United States Code, Section 1001", your life is held potentially at the whim of some random prosecutor. Such detachment of one's legal fate from one's actions has no place in modern lawful society.

Information asymmetry is one of the primary sources of power disparity. Dividing those subject to whimsical prosecution into the "in-the-knows" and "know-nots" whether through this law, civil forfeiture, the obscure tax code, or pay-to-play building codes is a progenitor of the police state. Free society requires transparent law.

xoa 10 hours ago 4 replies      
Ken White, a criminal defense attorney and former US prosecutor at Popehat.com, writes about these subjects, and has an entire series devoted to this sort of thing under the what-it-says-on-the-tin tag of Shut Up:


His writing makes for accessible, knowledgeable and amusing posts on a very serious topic and is extremely well worth a read. But the summary version is pretty straightforward: there is no such thing as an out-of-the-blue innocent visit from law enforcement, ever. This is kind of common sense when you stop and think about it: agents are not free and budget is not infinite. If they are actually devoting a warm body to talk to you, particularly a warm body to visit you, it is always, always potentially serious. It is not on a lark; they did not roll some dice and have your number come up for a community chat. They're talking to you for a reason, and given their fundamental purpose that reason may be quite bad for you, whether you did anything wrong or not. The true superpower of government is patience and grinding, relentless inertia. By the time they talk to you, odds are high that they have already done their research, extensively. They have a legal theory and narrative already in mind. If the Feds are knocking it's time to get a lawyer, period, particularly if you're fully innocent.

mpweiher 10 hours ago 1 reply      
"But why, you may ask, should law-abiding citizens be alarmed about this statute? Don't the feds only pick on big-league liars?"

That's the definition of a police state.

"Police state is a term denoting a government that exercises power arbitrarily through the power of the police force." -- https://en.wikipedia.org/wiki/Police_state

Overtonwindow 10 hours ago 0 replies      
This is really excellent advice. I work in legislative affairs with lobbyists and lawyers. After 12 years working in DC with and for congress, I can say unequivocally there is absolutely nothing good that can come from voluntarily cooperating with the police. You should avoid them at all cost. Do not volunteer. Speak through your lawyer if you speak at all, and never allow yourself to get trapped. It's nice to think prosecutors won't go after small fish, but to them you are worth nothing, and they will hang you out to dry if it gets them even the most remote, minor, or inconsequential conviction.
bradleyjg 10 hours ago 3 replies      
To summarize:

"Tell the agent that you have an attorney and that 'my attorney will be in contact with you.'"

"Simply state that you will not discuss the matter at all without first consulting counsel and that counsel will be in touch with him."

"Just respond that you will consult with your attorney (or 'an' attorney) and that the attorney will be in touch."

"Simply repeat your mantra that you will not discuss the matter with him in the absence of counsel."

Animats 11 hours ago 1 reply      
Also worth noting is that FBI agents are permitted to lie in the course of their duties. So anything they say to you cannot be trusted.
tylercubell 10 hours ago 4 replies      
The part that sticks out to me is:

> It is crucial to note that affirmatively declining to discuss the investigation in the absence of counsel is not the same thing as remaining completely silent. If you are not in custody, your total silence, especially in the face of an accusation, can very possibly be used against you as an adoptive admission under the Federal Rules of Evidence.

I thought we had the right to remain silent. Can someone explain this?

rexf 10 hours ago 2 replies      
Stepping back a bit, do people have a lawyer on call? I haven't had to use a lawyer for anything, and I wouldn't have one to point to if I had to tell someone to talk to my attorney first.
andrewflnr 1 hour ago 0 replies      
Is it normal for people to just have a lawyer they can call? I don't. I wouldn't know how to find a lawyer for this sort of situation, or even what kind to look for. Am I looking for a criminal defense lawyer?
ghufran_syed 7 hours ago 1 reply      
Extremely relevant and important video on the same subject, "Don't Talk to the Police" by a law school professor: https://youtu.be/d-7o9xYp7eE
hollander 11 hours ago 2 replies      
So if you lie to anybody in the US, and this person submits this lie to the government, it could be used to prosecute you? That is creepy!
ternaryoperator 1 hour ago 0 replies      
An attorney once added to the info in this article that if law enforcement come to your home and asks, "May we come in?" the answer is always "no." Step outside, pull the front door shut, and then tell them you won't speak without your attorney. If you consent to let them in the house, anything they see of interest is something they can act on.
dmacedo 4 hours ago 1 reply      
Why doesn't the US federal government publish a "your rights" like the UK does: https://www.gov.uk/browse/justice/rights

First thing I looked at when moving to the UK, in case I get pulled over: how to comply with law enforcement powers whilst understanding their limitations and my rights...

Are these published in the US, on a state level at least?

alexbecker 9 hours ago 1 reply      
I wonder what bug resulted in the string "government226128147that" being inserted. Clearly something to do with signed integer arithmetic, but where would that be happening?
rebuilder 11 hours ago 4 replies      
As a non-US resident potentially wishing to visit the USA, do I want to click this link or will it result in a flag being raised?
ryanmarsh 5 hours ago 1 reply      
This and other reasons are why I've drilled into my children "never, under any circumstances, talk to the police". It's a shame it has to be this way.
a3n 7 hours ago 1 reply      
> and if the agent promises you that nothing will happen to you if you tell the truth

I thought law enforcement is allowed to lie to you. Could they lie about this? IANAL.

Kenji 38 minutes ago 0 replies      
>How to Avoid Going to Jail Under 18 U.S.C. For Lying to Government Agents

Step 1: Belong to the political elite.

trhway 10 hours ago 1 reply      
No wonder so many non-college-educated people get imprisoned. It is like a minefield.
aleksei 11 hours ago 4 replies      
> Furthermore, a private employer can require you to cooperate with a law enforcement or regulatory investigation as a condition of continued employment.

.. How on earth is that possible?

forrestthewoods 11 hours ago 1 reply      
Obligatory: Don't Talk to Cops https://www.youtube.com/watch?v=i8z7NC5sgik

Everyone should watch this video. It may save your life.

gonzo 7 hours ago 0 replies      
tl;dr: lawyer up, and don't talk to the feds until you do.
geekamongus 10 hours ago 0 replies      
I thought the answer would be: run for office.
coolsunglasses 11 hours ago 3 replies      
Hopefully I arranged the images correctly. I've posted an imgur album of the article because a pastebin/gist is trivially haystacked


wehadfun 11 hours ago 0 replies      
Someone post the youtube video.
Postal: Open source mail delivery platform, alternative to Mailgun or Sendgrid github.com
640 points by rendx  21 hours ago   163 comments top 25
jitbit 9 hours ago 4 replies      
Great discussion here, not necessarily about the OP's link, but still learned a lot. Would love to contribute my 2 cents...

Our app[1] sends/receives several million emails per month. Not an exaggeration, it's actually seven figures.

Meaning it's more than 100k a day. Meaning it's 5-6 emails every friggin second. On average. It, of course, peaks during US daytime, up to 30 per second.

We tried a looooh-ot of solutions (all priced at THOUSANDS a month at this volume) including Mailgun, Sendgrid, SES etc, but finally settled on a tiny Ubuntu micro-instance on EC2, running Postfix. It has 1 GB of memory, costs us $4 a month and the CPU load rarely goes higher than 4%.

Of course you would need to get yourself familiar with SMTP, Postfix, SPF/DKIM, mx-validation, blacklists etc. And by "familiar" I mean "learn it to the core" :))

Another thing - you need to build-up reputation for your IP, cause email providers like outlook/gmail/yahoo will simply reject your emails if you start sending a LOT out of the blue. You have to build it up gradually, takes months to get there. Makes it a huge PITA when you need to change your IP :((

PS. If you need incoming email to call some external REST-api - postfix can launch a local php-script that does that. Not sexy but - $4 a month, right.

[1] https://www.jitbit.com/hosted-helpdesk/
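The "incoming email to REST API" trick at the end of the comment above can be sketched roughly like this — a script that Postfix invokes as a pipe command for each delivered message. The original mentioned a PHP script; this Python version, the endpoint URL, and the alias wiring in the docstring are all illustrative assumptions, not the commenter's actual setup:

```python
"""Hypothetical Postfix pipe target: parse an incoming message from stdin
and forward a JSON summary to a REST endpoint.

Example wiring (hypothetical, in /etc/aliases):
    incoming: "|/usr/local/bin/mail_to_rest.py"
"""
import email
import json
from email import policy
from urllib import request

API_URL = "https://api.example.invalid/incoming-mail"  # placeholder endpoint

def summarize(raw: bytes) -> dict:
    """Parse a raw RFC 5322 message into a small JSON-able dict."""
    msg = email.message_from_bytes(raw, policy=policy.default)
    body = msg.get_body(preferencelist=("plain",))
    return {
        "from": str(msg["From"]),
        "subject": str(msg["Subject"]),
        "body": body.get_content() if body is not None else "",
    }

def forward(raw: bytes) -> None:
    """POST the summary to the REST endpoint (what the pipe script does)."""
    data = json.dumps(summarize(raw)).encode()
    req = request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    request.urlopen(req)
```

The sketch only extracts headers and the plain-text body; a real pipe target would also need error handling so Postfix can bounce or retry when the API is down.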

dan1234 21 hours ago 12 replies      
Isn't part of the reason for using Mailgun, Sendgrid etc. that you get to send via IP addresses with good reputation?
stephenr 16 hours ago 4 replies      
This sort of thing is fantastic to see, regardless of whether you want to run your own mail servers for this task.

That they provide a hosted service using the same stack is great to see: host it yourself, or pay them to host it for you. This is what great open source businesses can look like.

No "open core" where the good stuff isn't available for the community, and community efforts to implement the same thing get rejected.

No viral licensing like the GPL or jesus shit on a stick, the AGPL.

oblib 10 hours ago 0 replies      
I took a look at the github project and I hope they do great.

As a side project I've been working on setting up my own mail server using "Mail-in-a-Box" (https://mailinabox.email) for about a month now.

It's been a learning process but I do like the idea of having control over this. I've got it all working: sending emails from a remote app server using a Perl script, and an email account configured in my Mac Mail app.

Mail-in-a-Box is really pretty sweet. It walks you through the install and has a very nice Control Panel that handles DNS and user account setup and it comes with the Roundcube email web client and ownCloud.

There are hurdles. I got a clean IP from DigitalOcean with no problem, but the domain name I'm using is new and has no reputation so when I tested it last week sending emails from the app server to my personal Gmail account the Gmail server responded saying:

"Our system has detected that this message is 421-4.7.0 suspicious due to the very low reputation of the sending IP address. 421-4.7.0 To protect our users from spam, mail sent from your IP address has 421-4.7.0 been temporarily rate limited."

I only sent about a dozen emails, all to my own Gmail account, so that seems to be a bit harsh.

Then it occurred to me why Gmail exists and it made a lot more sense. So, there are hurdles.

ksajadi 21 hours ago 7 replies      
We send a lot of emails, which makes services like Postmark or Mandrill very expensive. Since switching to Amazon SES, the cost has been much lower, but the lack of individual email tracking has been a pain (in case a recipient claims they haven't received it, or we need to track opens, etc.).

This UI with an Amazon SES backend would be ideal.

pvsukale3 17 hours ago 1 reply      
Will this platform actually be usable for independent developers, considering today's spam-blocking landscape? How should one proceed in order to not get blacklisted while actually using it for the first time?
vanilla 18 hours ago 1 reply      
This seems to be the software behind appmail.io[1], a service just like mailgun and sendgrid.

[1]: https://appmail.io/

WA 11 hours ago 0 replies      
Awesome, I've been waiting for this, because I don't believe in the "delivery" promise of big brand mail providers. Most newsletters* go to spam, no matter if they're via Mailchimp, Aweber or transactional mail providers.

Maybe the big names work better with Gmail, since Gmail has quite an aggressive spam filtering. But neither I, nor most of my customers (Germans) use Gmail, so I don't care.

Edit: *most newsletters I receive

cryptarch 18 hours ago 1 reply      
Do you have a system to prevent trusted users from being given low-reputation IP addresses without them having to pay for a dedicated IP?

Something like, "if you don't pay for a dedicated IP, but have been a non-spamming client for a month, we move you to a higher-rep IP address pool"?
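The promotion policy sketched in the question above could look something like the following — a hypothetical rule where all names and thresholds are invented for illustration:

```python
from dataclasses import dataclass

CLEAN_DAYS_REQUIRED = 30  # invented threshold: one month without spam complaints

@dataclass
class Client:
    has_dedicated_ip: bool
    clean_days: int  # consecutive days without a spam complaint

def assign_pool(client: Client) -> str:
    """Pick a sending pool for a client under the hypothetical policy:
    dedicated IPs are untouched; shared clients graduate to a
    higher-reputation pool after a month of clean sending."""
    if client.has_dedicated_ip:
        return "dedicated"
    if client.clean_days >= CLEAN_DAYS_REQUIRED:
        return "high-reputation-shared"
    return "default-shared"
```

A real provider would likely track complaint rates per message rather than a simple clean-day counter, but the shape of the decision is the same.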

senic 21 hours ago 0 replies      
Interesting, I was just looking for something like this the other day, thanks HN! The software looks quite polished. Hopefully there'll be a dockerized version to play with.
brightball 17 hours ago 1 reply      
I'm most interested to see what their solution for handling INCOMING email looks like. Having used the inbound APIs of the others, they are all pretty polished and reliable but have inconsistent APIs. I've always been a little concerned about how to handle high volumes of inbound email if the prices on those services ever went up.
ohstopitu 16 hours ago 1 reply      
If I were to host this myself, I'd still need a static IP that had a good reputation. GCP and Azure both mention that we should not be hosting mail servers on their platforms (rather, they all suggest we set up mail servers + relays to a reputed IP).

How would I go about getting a static IP or a reputed IP?

jbverschoor 18 hours ago 0 replies      
I'm a big fan of Mailgun. It is far better than Mandrill, Sendgrid and probably SES - without a lot of the setup
spuz 19 hours ago 0 replies      
Does Postal allow you to set up an email group? I.e., an email address that forwards any email sent to it to a defined list of other email addresses? This is a feature of Mailgun, but unfortunately it does not quite behave the way we need with regards to setting the 'Reply-to' address.

I'm looking forward to seeing the documentation and setting up Postal on my server.
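For what it's worth, the group behavior being asked about can be sketched in a few lines: expand one address into many recipients while forcing Reply-To back to the original sender. This is an illustrative sketch, not Mailgun's or Postal's actual implementation; the group map is invented:

```python
import email
from email import policy

# Hypothetical group definition: one address fans out to several recipients.
GROUPS = {
    "team@example.invalid": ["alice@example.invalid", "bob@example.invalid"],
}

def expand_group(raw: bytes):
    """Return (recipient, message) pairs for a group address, with
    Reply-To rewritten so replies go to the original sender."""
    original = email.message_from_bytes(raw, policy=policy.default)
    pairs = []
    for recipient in GROUPS.get(str(original["To"]), []):
        copy = email.message_from_bytes(raw, policy=policy.default)
        del copy["Reply-To"]  # drop any existing header (no-op if absent)
        copy["Reply-To"] = str(original["From"])
        pairs.append((recipient, copy))
    return pairs
```

The key design choice is exactly the one the comment complains about: whether Reply-To points at the sender (as here) or back at the group address decides whether replies go to one person or the whole list.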

jccalhoun 15 hours ago 3 replies      
Is it common now to call email "mail?" I don't know anything about this area (never even heard of mailgun or sendgrid) and wondered if this was some kind of service for sending actual mail.
t3ra 20 hours ago 0 replies      
Sounds interesting. I have been looking for a proper mail server for sending marketing email for a while now.

I'll wait for the docs to update, but until then:

Does it do things like IP rotation?

Is it using Postfix at the backend, or is it a complete mail server?

What kind of list management features does it have? (I am looking to compare with Interspire)

nik1aa5 10 hours ago 0 replies      
So can I use this in front of Postfix?
yowza 21 hours ago 2 replies      
Not usable without a proper license.
hultner 21 hours ago 1 reply      
I suppose this is more like an open source alternative to PowerMTA? How would you compare them?
IgorPartola 18 hours ago 2 replies      
Now if only there was a decent self hosted alternative to MailChimp.
ramoq 16 hours ago 0 replies      
Question: if all these companies monitor outgoing SMTP traffic, how are people sending billions of spam messages a day? What's the loophole?
nbevans 20 hours ago 1 reply      
The reason we use Mailgun is to avoid deploying and maintaining e-mail infrastructure, which is very hard and high-cost. We would rather keep paying Mailgun about $20/mo, as this is cheaper by several orders of magnitude than the self-hosting option.
no1youknowz 20 hours ago 0 replies      
Has anyone tried elasticemail.com? I've seen their CEO post here before.

If you have, what's your experience with them vs Mailgun or Sendgrid?


madspindel 21 hours ago 3 replies      
This project fails to answer the question 'Why?'.
jpkeisala 19 hours ago 0 replies      
Are Postal and services like Mailgun a solution for functionality where email can be viewed/replied to from a personal email inbox and a web app? Then somehow magically these services route mail to the relevant inboxes and the web app, like Zendesk etc.?
Symantec CA Response to Google Proposal and Community Feedback symantec.com
105 points by mentat  6 hours ago   103 comments top 13
MichaelGG 6 hours ago 7 replies      
>require these applications to be recoded, recompiled and redistributed.

Aka "updated".

The entire post is basically "ok how about we be really good from now on and suffer no consequences, cause it'd be really shitty for us if we had to be penalised".

They also posture a lot talking about how big their customers are, almost boasting about how inflexible and slow these big companies are, as if that's somehow Google's or general Internet users' problem.

durkie 5 hours ago 4 replies      
> Embedded devices that are pinned to certificates issued by a Symantec public root to communicate to resources over the Internet or Intranet. Replacing these certificates would result in immediate failures and the need to recode and reimage the firmware for these devices.

> Mobile applications that have pinned certificates. Replacing server certificates would require these applications to be recoded, recompiled and redistributed.

Are either of these relevant? Isn't Google's proposal to eventually have Chrome stop trusting the Symantec CA system? That change seems like it would have no effect on 1) embedded devices that aren't running Chrome 2) things that already are in place that trust the CA.
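As a side note, the pinning Symantec's post refers to amounts to a client-side fingerprint check, which is why replacing a pinned server certificate breaks already-deployed apps. A minimal sketch follows — it pins the whole certificate for brevity (real deployments usually pin the SPKI instead), and the fingerprint constant is a placeholder, not a real value:

```python
import hashlib
import socket
import ssl

# Placeholder: a real client embeds the expected fingerprint at build time.
PINNED_SHA256 = "0" * 64

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, as hex."""
    return hashlib.sha256(der_cert).hexdigest()

def connection_matches_pin(host: str, port: int = 443) -> bool:
    """Connect over TLS and check the presented leaf certificate
    against the pinned fingerprint."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return fingerprint(der) == PINNED_SHA256
```

Because the expected hash ships inside the client, no amount of server-side reissuance helps: the client binary itself has to be rebuilt and redistributed, which is the "recoded, recompiled and redistributed" point quoted above.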

ahmeni 6 hours ago 0 replies      
There is a significant amount of "we will" language in here, rather than "we have started" or "have already begun". The choice of language speaks volumes: they only appear willing to take the necessary audit actions if Google agrees, rather than treating them as something that needs to be done regardless.
Aaron1011 5 hours ago 1 reply      
> This cohort is an important constituency that we believe has been under-represented to date in the public commentary that has been posted to the Google and Mozilla boards since large organizations rarely authorize employees to engage in such public discussions, particularly in an area related to security.

Are these large organizations somehow incapable of putting out official statements regarding CAs? If they're being 'under-represented', it's their own fault for not speaking up.

tyingq 6 hours ago 1 reply      
I don't think Google was soliciting a counter-proposal from Symantec. It will be interesting to see their reply, and whether it's a literal reply or just a version push of Chrome with their original plan.[1]

[1] https://groups.google.com/a/chromium.org/forum/m/#!topic/bli...

Edit: They did ask for community feedback, comments on risk, etc. But they do already have a timeline. See link above.

bitmapbrother 5 hours ago 0 replies      
Symantec is just going through the 5 stages of CA grief. First they were oblivious to it, then they were angry about it and called Google irresponsible. They're now at the proposal stage. Next will be depression and finally acceptance of their incompetence.
chaz6 15 minutes ago 0 replies      
The irony is, for an industry based on trust, if this abuse is not penalized, then the industry as a whole is pointless.
lsh123 4 hours ago 0 replies      
The CA business is built on trust. Symantec lost that trust and got thrown out by Chrome. I hope that as a result Symantec will lose a significant amount of CA business, making a showcase for other CAs to demonstrate that good processes and trust are key to staying in this business.
jupp0r 5 hours ago 0 replies      
Notably missing from their post:

This is how we will make sure that mis-issuing certificates for gmail.com, etc. never happens again. An external audit every 3 months is not going to fix anything.

kaishiro 5 hours ago 1 reply      
Does anyone else get a blocking "Symantec Connect" and "Loading your community experience" for a good 10 seconds before it loads? Could just be my mobile connection but man is that lame.
dtemp 3 hours ago 0 replies      
I'm still trying to figure out if my 36-month wildcard certs from RapidSSL are going to be distrusted. Their intermediate is signed by GeoTrust, which is owned by Symantec, and a blog post says that with Chrome 59, certs with a validity period over 33 months will be distrusted. Chrome Canary on v60 shows them still working.
geofft 5 hours ago 1 reply      
> These customers include many of the largest financial services, critical infrastructure, retail and healthcare organizations in the world, as well as many government agencies. This cohort is an important constituency that we believe has been under-represented to date in the public commentary that has been posted to the Google and Mozilla boards since large organizations rarely authorize employees to engage in such public discussions, particularly in an area related to security.

... well that's their problem, right?

You can't simultaneously say "These are some of the most important organizations in the world and you'll cause worldwide chaos" and "Won't someone listen to these poor companies, I am the Symantorax, I speak for the cohort, for the cohort has no tongues."

revelation 6 hours ago 3 replies      
I love how none of the example "dependencies" they give should be using a public CA in the first place.
Building Accurate Shipment Timelines A Sorted Affair flexport.engineering
87 points by rottencupcakes  13 hours ago   14 comments top 4
siscia 9 hours ago 3 replies      
Hi, I would like to share an idea I had a while back with you Flexport folks.

I work at an IoT company that focuses on the hardware side, while I am the one responsible for keeping the software running smoothly.

Chatting about the new hardware pieces that were coming, my first thought was the following.

Have a very small piece of hardware, a transmitter, ship along with whatever you are shipping. The transmitter would regularly (even every 5 minutes) transmit its position, so it would be possible to track the geographical coordinates of every lot of product that is shipped.

The difficult part is that the transmitter needs a receiver (with Internet connectivity) in close range, ~10km; the solution would be to install those receivers in the ship or in the truck or in the train.

The hardware itself is something that we use every day and it is not a big issue; the only real problem that I see is the installation of those receivers.

Is this something worth solving?

aubreycw 12 hours ago 1 reply      
Author here! This was the first post of our new Flexport Engineering blog (we'll be adding more content soon). Happy to answer any questions about the article or how/why we're using Kahn's algorithm.
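For readers unfamiliar with it: Kahn's algorithm is a topological sort that repeatedly emits nodes whose prerequisites are all satisfied — a natural fit for ordering shipment milestones. A minimal sketch, with milestone names invented for illustration (not Flexport's actual model):

```python
from collections import deque

def kahn_sort(edges):
    """Topological order via Kahn's algorithm; raises on a cycle.
    `edges` is a list of (before, after) dependency pairs."""
    indegree, successors = {}, {}
    for before, after in edges:
        successors.setdefault(before, []).append(after)
        successors.setdefault(after, [])
        indegree[after] = indegree.get(after, 0) + 1
        indegree.setdefault(before, 0)
    ready = deque(node for node, deg in indegree.items() if deg == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in successors[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(indegree):
        raise ValueError("dependency cycle detected")
    return order

# Invented shipment-milestone dependencies for illustration:
MILESTONES = [
    ("cargo pickup", "origin port arrival"),
    ("origin port arrival", "vessel departure"),
    ("vessel departure", "destination port arrival"),
    ("customs clearance", "final delivery"),
    ("destination port arrival", "final delivery"),
]
```

Running `kahn_sort(MILESTONES)` yields an order in which every milestone appears after all the milestones it depends on.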
cconsidine 12 hours ago 0 replies      
A+ technical gifs
sghiassy 12 hours ago 2 replies      
Nice article

Will any software be open sourced related to this?

Examining and Learning from Complex Systems Failures uptimeinstitute.com
20 points by wallflower  6 hours ago   4 comments top 2
a3n 3 hours ago 0 replies      
> 16. Safety is a characteristic of systems and not of their components.

> 18. Failure-free operations require experience with failure (Cook 1998).

These two things in particular caught my eye, especially as I'm not in system design.

So with 16, you can't just slap together components with sufficient -ility rating, you have to study and design the safety of the system as a whole. Those components, in two different designs, are exposed to different stresses in different ways, and live within different monitoring and mitigation systems.

With 18 ... I wonder what the value would be to a promising new engineer who opted to not take the big offer from Google or Apple, and instead worked down among the unwashed for awhile, gathering experience. There's probably a lot of value in having experienced failures in a system that has no relevant mitigation, and you have to figure out what to do next, as the first person to have seen it.

Luxurious to work at Google, where if it hits the fan, there's lots of systems and experience to handle it. Character building if you alone had to figure out how to keep the ship from sinking, from an "iceberg" that no one had thought of and planned for.

awinter-py 4 hours ago 1 reply      
very much want to read this but uptimeinstitute is currently, and very ironically, down

(maybe because of a complex system failure? or just load)

The Whale 500ish.com
91 points by madspindel  15 hours ago   31 comments top 10
nl 20 minutes ago 0 replies      
It's weird to see all the people here complaining about how the article is about the market cap, when that's not the only thing the article is about.

Sure, the market cap is interesting, but this is an interesting article because it makes a compelling case that a $400B company is cheap even when Walmart outsells it and FedEx out-delivers it.

It was only a few years ago that people complained about how Amazon didn't make any profits because they would reinvest them all in growth, and how it was impossible to sustain that.

They were right, but not in the way they thought. It does seem to be impossible to keep investing all the profits in growth - they are just making too much!

So, it's the combination of this growth AND the market cap that is unprecedented. No one can tell if it is overvalued, but complaining about talking about the cap is missing a pretty amazing story.

Upvoter33 13 hours ago 1 reply      
I do agree Amazon is amazing and frightening all at once. However, I really hate all of this analysis based on market cap instead of revenue/profit/etc. It's just like so much fantasy.
callmeed 12 hours ago 3 replies      
Good essay but I don't like it when people use market cap in these kinds of comparisons.

Don't forget that Amazon has 2x the market cap of Wal-Mart, but only slightly above 1/4 of their annual revenue.

debacle 7 hours ago 1 reply      
I'm intrigued by the coming future of the megacorp. Philip K Dick wrote many stories where the giant corporation is the tribal, protect-our-own protagonist against the faceless, authoritarian government. Paycheck being the most memorable example. It's going to be interesting when companies don't even have to pretend to be subservient to the government anymore.
fela 11 hours ago 0 replies      
The current stock price (and thus market cap) already assumes future growth. The market cap would increase further only if growth exceeds the current expectations of investors. (Or due to other factors unrelated to growth).
woliveirajr 12 hours ago 0 replies      
I'm curious how Amazon will take over the world. I can read and understand how America buys a lot of stuff from it, how good the delivery is, etc., but in some countries they are barely beginning. And some local companies have tried the same model and couldn't make it work. They haven't gone bankrupt, but they simply don't have the same size, the same relevance, that Amazon has in the US.

Will that model still be applied worldwide?

EduardoBautista 11 hours ago 5 replies      
In my experience, the quality of Amazon is average at best. The only reason why they are winning is because they are cheap. The Walmart model basically: Cheap wins over quality.
untilHellbanned 10 hours ago 1 reply      
Just my opinion, but Amazon, FB, Apple, and all these companies that thought leaders like MG Siegler, Stratechery, and Gruber write about aren't interesting.

They won at the same game by outcompeting others at selling commodity junk. Great. How? The same way. They had leaders whose businesses won the lottery and who cared about other humans only to the infinitesimally small degree necessary to win that big (one can lose big with that give-no-f*cks mindset too, hence the lottery).

Selling crap isn't interesting. Show me somebody that makes a similar-sized dent in human health, the environment, or world peace then I think the fawning Medium articles will be worth it.

kolbe 9 hours ago 1 reply      
The author regularly conflates Amazon's price and Amazon's value. I understand articles like this getting traction on pump and dump finance forums, but I'm a little disappointed to see it on the front page here.
bsder 10 hours ago 5 replies      
Except that suddenly there are lots of people NOT buying at Amazon anymore.

In fact, most of my purchases go into 3 buckets:

1) Standard but I don't want crap--probably I'm going to a retailer website and picking it up (Target, Frys, etc. -- NewEgg is my exception to actually having brick and mortar). If I can pick it up, it's probably at least a notch or two above crap.

2) Total crap--probably ordering from Alibaba--not Amazon.

3) Something I need to go to a store for

To me, Amazon is being used decreasingly by everybody around me.

Hanging up my spurs lowercasecapital.com
356 points by gfitz  16 hours ago   158 comments top 21
dmitri1981 15 hours ago 4 replies      
Chris Sacca's background story is little known and he talks about it at length with Jason Calacanis. The part about him losing millions while still at university is as shocking as his recovery from the setback is amazing. Check out https://www.youtube.com/watch?v=6VOQnK7O2To
ryandrake 3 hours ago 3 replies      
Wow, 42 years old, huh? Hard not to be a little jealous. Being pretty close to his age, it's maddening to think that with a different roll of the dice I might have been that person. Or I might have been unemployed and a heroin addict in Appalachia. Life seems so random and non-deterministic.

Good on him. I sometimes wonder why more people who are insanely rich don't just retire and spend the rest of their lives doing something else...or doing nothing. I think of the C-level execs in my own company--they each have so much that they and their families for generations could literally do nothing for the rest of their lives and still continue to get richer via interest. Why go on working?

luhn 14 hours ago 5 replies      
I just started listening to the Startup podcast, which features Chris in the first episode. He seems to be a very intelligent and talented guy while still remaining grounded. I wish him well with his future endeavors.

Also, ten minutes after reading the article, I just realized the pun in the name "Lowercase Capital."

canistr 14 hours ago 2 replies      
Has anyone else noticed that a bunch of the people leaving the investing scene/liquidating in the past year are part of the same circle? Specifically Sacca, Tim Ferriss, and Richard Branson?

I'm certain there might be more people.

jtraffic 13 hours ago 1 reply      
It's interesting to think about this decision in the context of Nassim Taleb's writing. If you have a large sample of investors, some of them would be good just by chance. It's possible that Chris Sacca is not good just by chance, but we can't really know, and that is an excellent feature of walking away at this point. Most people will just tell themselves that Sacca "beat the game" and left. It could easily be that he got super lucky and realized it.
downandout 11 hours ago 0 replies      
The problem I have seen play out with friends who have had tons of relatively easy money fall into their laps is that it saps ambition. Waking up every morning knowing that you have enough money for a hundred lifetimes can be freeing, but it also means that you aren't required to do anything at all. Many people in that situation choose exactly that.

My understanding is that Chris isn't the nicest guy in the world, so perhaps he's doing entrepreneurs a favor by backing other VC's. Some of his capital will still be out backing startups, but their founders won't have to put up with him personally.

padobson 13 hours ago 1 reply      
tldr - compulsive gambler who won the Twitter lottery vows to quit playing the lottery and move on to making podcasts and buying political influence
winter_blue 7 hours ago 2 replies      
> I am spending a great deal of time meeting with all of the beautifully spontaneous and decentralized organizations that have been popping up in the wake of our electoral calamity as well as dozens of candidates at all levels of government.

Does anyone know where (or how) to find these organizations?

I'd love to get involved in some of these organizations that are trying to build a better future for our country, and try to contribute/help in any way that I can.

pdog 14 hours ago 1 reply      
"As fate would have it, Jay's status appears to be at an all-time high, perfect time to say goodbye."


jakelarkin 12 hours ago 1 reply      
The 2010-2015 era of generous outcomes for early-stage startups is over. No more easy follow-on rounds or acquisitions. Being a VC becomes a lot harder, so all these non-institutional personality guys - the Sacca, Tim Ferriss, Ashton Kutcher, and Russ Hanneman types - are going to move on.
jpeg_hero 15 hours ago 4 replies      
This looks like a really easy marker that we've topped out on unicorn/big secondary market.

When the smart big players start to take their chips off the table you can be sure this thing has peaked.

sremani 12 hours ago 3 replies      
A very nice write-up ruined by very partisan ramblings at the end. There is nothing wrong with believing in a liberal cause, but to consider the other side the devil incarnate is a bit disturbing.
brianzelip 11 hours ago 0 replies      
Always loved this website design whenever links to it get shared here.
angersock 13 hours ago 6 replies      
> As a rich white guy, being an activist/loudmouth in the #resistance often means taking up political positions that are against my own apparent self-interest. These oppressive zealots in the White House...

> You mean beyond fighting a despotic regime...

> I assure you, thats not going to happen. Nevertheless, I am spending a great deal of time meeting with all of the beautifully spontaneous and decentralized organizations that have been popping up in the wake of our electoral calamity as well as dozens of candidates at all levels of government. I find so much hope in the new wave of leaders and builders who are standing up during these times and I want to be there to support them. I will have more to say about this in the days ahead, but, you know the drill.

I'm sure that this is a common sentiment out in the Valley, but it's really kinda distressing to see people with access to so much capital writing with such melodrama.

There's so much wrong at the local level--homelessness, education, people broke with medical problems--and yet these folks seem a lot more concerned with a strawman oppression.

lacampbell 6 hours ago 1 reply      
I didn't know venture capitalists had to wear spurs.
forgottenacc57 4 hours ago 0 replies      
Wish I was hanging up my spurs.
gtallen1187 6 hours ago 0 replies      
Beautifully written, thank you Chris!
ghaff 14 hours ago 2 replies      
>no better way to create technology than startups

How did the Koolaid taste? You certainly must have drunk enough of it.

brilliantcode 13 hours ago 1 reply      
So smart money is leaving startup/VC/SV...it'd be interesting to see who's been swimming naked all these years when the bubble bursts this Fall

edit: oooh, definitely a touchy subject around here ;)

kevinmannix 16 hours ago 3 replies      
Chris Sacca is one of the few VCs that people outside of the SV bubble and even the tech world know and associate with the valley.

He, along with Mark Cuban and potentially a few others, will likely be among the few of this era remembered many decades from now, when history gets consolidated into a few figures, much like Gordon Moore of Intel and William Shockley of 1950s Mountain View.

staunch 15 hours ago 2 replies      
He's in the extremely privileged position to fund ten thousand people's potentially world-changing dreams. Instead, he's going to squander it podcasting in his pajamas.

We need someone with the ambition Paul Graham had when he launched Y Combinator.

Silicon Valley is still not a meritocracy. There are still tens of thousands of ambitious people not receiving funding. There's still no way for the early users of massive products to buy equity and lift themselves from the lower class.

There is no greater lever in the world than technology, no better way to create technology than startups, and no more sure way to create great startups than by funding them.

How the Carolinas Fixed Their Blurred Lines (2014) nytimes.com
49 points by Tomte  11 hours ago   20 comments top 6
SwellJoe 7 hours ago 1 reply      
Maybe not relevant, but I grew up in Greenville, SC, which is somewhat close to the border with NC. Once, I went on a Boy Scout camping trip near a Civil War battlefield in the mountains along the border. After visiting the history center on the site, our scout leader told us to hike back to camp, with an older boy in the lead (while the scout leader drove back). Because the hike in had been quite long and seemed roundabout, a small subset of us decided to take a "shortcut" that our friend Finley insisted would get us back to camp sooner (which I guess was important because we had vital "set things on fire" projects to get back to).

We set off from the main group, and away from the well-defined trail; I don't remember if the boy in charge objected to our innovative approach to returning to camp, or just couldn't be bothered with it, but they went on ahead without our small adventure crew. The main group made it back to camp, as expected, about two hours later.

Our group, on the other hand, was still wandering through the woods as dusk approached about four hours later. Before panic set in, we luckily heard a truck off in the distance...so, we headed for it, and found a road. We assumed it was the road where the campground was located, and figured we'd be back to camp in no time (surely we were really close, given how long we'd been walking). An hour later, we saw a sign..."Welcome to South Carolina". We'd walked from South Carolina into North Carolina, and were many miles from camp.

The scout leader found us a few hours after dark; we were on the wrong road, going the wrong way, and had likely crossed back and forth from NC to SC a couple of times in our hike. We got back to camp around 11:30PM. Henceforth, getting lost was called a "Finley Shortcut".

This story doesn't have any real point, but I'd guess the border that we crossed a couple of times during that hike has since changed.

protomyth 8 hours ago 4 replies      
"two states better known for philandering politicians and restrictive voter ID laws than progressive politics"

What does this quip add to the article? Was it a necessary insult? Would the author make the same type of insult about Oregon and California?

jffry 8 hours ago 0 replies      
Follow-up: The changes clarifying the border between North and South Carolina were put into place last year: http://www.newsobserver.com/news/politics-government/article...
mindcrime 2 hours ago 0 replies      
Not sure if this spot was included in the change or not, but for a long time there was a little piece of South Carolina that was not accessible by land without driving into North Carolina first.


aftbit 2 hours ago 0 replies      
Here's a nice followup article that discusses the outcome for Lewis Efird's gas station, as well as other people impacted by the change.


dayburner 4 hours ago 0 replies      
I grew up in Waxhaw NC. Andrew Jackson was born in the area and both North Carolina and South Carolina claim to have the birthplace of the president.
Grabbed by Humboldt Squids for Science (1991) latimes.com
31 points by YeGoblynQueenne  10 hours ago   1 comment top
whitef0x 2 hours ago 0 replies      
Accompanying video can be found here https://www.youtube.com/watch?v=9Fkl312lldQ
A vigilante trying to improve IoT security gizmodo.com
273 points by jgrahamc  17 hours ago   202 comments top 22
086421357909764 15 hours ago 17 replies      
It's all fine and well until one of those improperly configured devices is a medical device or something critical. Yes, I understand that's part of the problem, but proving a point with risk isn't the right answer either. Every dialysis machine I've seen runs Windows XP, which any security professional will tell you is game over, but given that the market hasn't provided an alternative, it becomes a necessity to figure out how to protect these improperly updated / configured / designed devices.

Celebrating actions that negatively impact others is bad, and one day someone will do something they feel is right that impacts you, and you'll say... well, that's not fair.

intrasight 31 minutes ago 0 replies      
I am fascinated by the somewhat Darwinian trajectory this might take. Let's project forward ten or twenty years to when that smart lightbulb has the computing power of a 1990s-era supercomputer. Might all the lightbulbs in my neighborhood form an intelligent swarm? Will they be engaged in inter-swarm battles? It's not like there's an "off" button. Has any good sci-fi explored this topic?
ihodes 15 hours ago 0 replies      
This is a more in-depth source for the same story: https://arstechnica.com/security/2017/04/brickerbot-the-perm...
Analemma_ 13 hours ago 0 replies      
Uh-oh. Did somebody take my advice? https://news.ycombinator.com/item?id=12612539#12612809
lend000 13 hours ago 2 replies      
I toyed with a similar idea that would be limited to subnets or non-routable IP space, and open-source/community-driven, but I had to take it down almost immediately due to bad press/backlash. There's really no way to address this without government regulation requiring ISPs to assume the external cost of botnets coming from devices on their networks. And the only way to justify that is to modify our computer crime laws to allow them to scan, patch, maybe even brick (or just turn off the customer's Internet and notify them) when vulnerable devices are found.
dec0dedab0de 10 hours ago 1 reply      
I see a lot of people blaming the manufacturers or blaming the hacker, then coming up with analogies to support their point of view. I blame the users and don't feel bad for them at all. The analogy I'm going with: it's as if one of your neighbors bought a cannon as a piece of art and left it pointed at your house. Ignorance is not an excuse.
orng 15 hours ago 4 replies      
Slightly OT, but not too long ago I read that it is not uncommon for viruses to remove other known, competing malware. Does anyone know if anyone has ever made a virus whose only purpose is to remove other malware? Perhaps the same aggressive approach used by Janit0r is needed to stop the spread of worms, kill off botnets, etc.?
jahbrewski 11 hours ago 1 reply      
As someone who works as a software consultant for many IoT and connected device companies, how can I increase my understanding of IoT security? How can I ensure the devices I work with are secure?
brudgers 16 hours ago 0 replies      
"Something Wonderful has Happened"


fruzz 15 hours ago 4 replies      
It takes a special kind of entitled to destroy people's things and to then blame others (the manufacturers) for it.
zitterbewegung 11 hours ago 1 reply      
The writer of the story really tries hard to craft a vigilante-justice narrative and glorify someone who is causing real damage to computer systems. We saw the same thing with the hack of Ashley Madison. They make the original vendors out to be scumbags. Things are much more complicated. Yes, vendors and websites should keep things more secure, but if you really want IoT to be more secure, I don't believe hacks, large or small, are the best way to do it. The consumer is really the one that loses here.
pavel_lishin 12 hours ago 1 reply      
> if somebody launched a car or power tool with a safety feature that failed 9 times out of 10 it would be pulled off the market immediately. I don't see why dangerously designed IoT devices should be treated any differently

Really? He doesn't see how a car is different from a webcam? And why there are different safety standards for each?

Their goal is laudable, but this seems like a fun way to engage in vandalism while hiding behind an ideological aegis. The sort of thing I'd do when I was 15.

tunap 5 hours ago 0 replies      
Didn't a grey hat similarly flash a ton of old routers following Heartbleed? Search isn't providing results atm, but I do recall an uptick in retail routers failing after the Heartbleed news wave, with little mentioned as to the "why". If memory serves, it didn't "brick" them; it broke DHCP (no longer assigned dynamic addressing, WAN or LAN).
general_pizza 13 hours ago 1 reply      
The method they're describing is only permanent for devices without a removable startup disk, right? If they ran this on my Raspberry Pi, for example, just reformatting the SD card and following the same process as when I first got it would immediately fix it.
Pica_soO 9 hours ago 0 replies      
Shouldn't be so merciful as to brick it. Should have taken over some garage door openers, measured the average time between opening and closing, and then closed the door suddenly after t == t_Signal + t_Average * 1/3. Security is when your door is not trying to get into your car. The car-crackodile would raise awareness.
NicoJuicy 16 hours ago 0 replies      
This is perhaps the only way: get the customer irritated by security exploits.

Nice thinking though

flukus 5 hours ago 0 replies      
Is this how we solve security? An army of white botnets in a never ending war with an army of black botnets?
WalterBright 6 hours ago 0 replies      
It's simple for manufacturers to make their devices secure from corruption. Put the firmware in ROM. Malware will not survive rebooting the device.

If you really must be able to update the firmware, add a physical "write enable" switch, not a software-enabled one.

draw_down 14 hours ago 0 replies      
That's terrible but also kind of awesome. Remember, these things are unsecured and they're going to get owned anyway, it's just a matter of time. That doesn't make this right, but it is important context to keep in mind.
nickpsecurity 15 hours ago 0 replies      
It's what I said people should do. Kind of like the 2nd Amendment taken against 3rd-party devices that nobody will do anything about. It might also generate demand for more secure devices on the consumer side, or liability on the supplier side for the same. Good to see someone is doing it. There were quite a few other people wanting to see these bricked in the last HN thread about it:


hyperhypersuper 14 hours ago 1 reply      
Can somebody add "brickerbot author" to the title so it carries at least a bit of information?
grzm 13 hours ago 0 replies      
Actual article title: "This Hacker Is My New Hero"
       cached 27 April 2017 07:02:01 GMT  :  recaching 1h 6m