Hacker News with inline top comments, 22 Feb 2017
1
Seven earth-sized planets discovered circling a star 39 light years from Earth nature.com
1065 points by ngoldbaum  4 hours ago   457 comments top 51
1
dmix 4 hours ago 19 replies      
My favourite part of the press release:

> The planets also are very close to each other. If a person was standing on one of the planets' surfaces, they could gaze up and potentially see geological features or clouds of neighboring worlds, which would sometimes appear larger than the moon in Earth's sky.

> In contrast to our sun, the TRAPPIST-1 star, classified as an ultra-cool dwarf, is so cool that liquid water could survive on planets orbiting very close to it, closer than is possible on planets in our solar system. All seven of the TRAPPIST-1 planetary orbits are closer to their host star than Mercury is to our sun.

https://www.nasa.gov/press-release/nasa-telescope-reveals-la...

2
e0m 4 hours ago 18 replies      
To get a spacecraft there it would take:

Accelerating at 1g the 1st half, and decelerating at 1g the 2nd half, the traveler would experience 7.3 years of time. For observers it would take 41.8 years at a max speed of 0.998c

If you had a near perfect hydrogen -> helium fusion engine, it'd take about 6 million tons of fuel (about the mass of the Pyramids of Egypt or 2,000 Saturn V rockets)
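These figures line up with the standard relativistic rocket equations. A quick sanity check in Python (a sketch; it assumes constant 1g proper acceleration with an instantaneous flip at the midpoint):

```python
import math

C = 299_792_458.0        # speed of light, m/s
G = 9.80665              # 1 g, m/s^2
LY = 9.4607e15           # metres per light year
YEAR = 3.1557e7          # seconds per year

d = 39 * LY / 2          # accelerate over the first half of the trip

# Relativistic rocket equations for constant proper acceleration a:
#   coordinate time  t   = sqrt((d/c)^2 + 2d/a)
#   proper time      tau = (c/a) * acosh(1 + a*d/c^2)
t_half = math.sqrt((d / C) ** 2 + 2 * d / G)
tau_half = (C / G) * math.acosh(1 + G * d / C ** 2)
v_peak = G * t_half / math.sqrt(1 + (G * t_half / C) ** 2)

observer_years = 2 * t_half / YEAR    # ~41 yr, near the quoted 41.8
traveller_years = 2 * tau_half / YEAR # ~7.3 yr, matching the comment
peak_speed = v_peak / C               # close to the quoted 0.998c
print(observer_years, traveller_years, peak_speed)
```

The small gap on the observer-time figure comes down to rounding and the exact constants assumed, not the method.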

3
6502nerdface 4 hours ago 1 reply      
> Gillon says that the six inner planets probably formed farther away from their star and then migrated inward. Now, they are so close to each other that their gravitational fields interact, nudging one another in ways that enabled the team to estimate each planet's mass. They range from around 0.4 to 1.4 times the mass of the Earth.

Given that these planets are so close to each other, and interacting with each other gravitationally, how likely is it that their orbital arrangement is stable over geological time?

4
cletusw 4 hours ago 2 replies      
Apparently the finds are close enough to us that we should be able "to follow up, to see, for example with the James Webb Space Telescope that we're going to launch [next] year, the atmospheres and also to look at bio-signatures, if there are any" !
5
jobu 4 hours ago 0 replies      
The article mentions the star being an ultra-cool dwarf star, but doesn't give an explanation of what that means. Here's some info from Wikipedia that I found useful:

TRAPPIST-1 is an ultracool dwarf star that is approximately 8% the mass of and 11% the radius of the Sun. It has a temperature of 2550 K and is at least 500 million years old. In comparison, the Sun is about 4.6 billion years old and has a temperature of 5778 K.

Due to its mass, the star has the ability to live for up to 45 trillion years ...

https://en.wikipedia.org/wiki/TRAPPIST-1

6
r721 4 hours ago 6 replies      
Wiki says "TRAPPIST-1 is an ultracool dwarf star that is approximately 8% the mass of and 11% the radius of the Sun. It has a temperature of 2550 K and is at least 500 million years old."

https://en.wikipedia.org/wiki/TRAPPIST-1

Too young a star for alien-life hopes?

7
rlanday 2 hours ago 2 replies      
Now technically, according to the International Astronomical Union, these bodies can't be planets since they don't orbit the sun: https://www.iau.org/static/resolutions/Resolution_GA26-5-6.p...

"1. A planet is a celestial body that (a) is in orbit around the Sun"

Makes me wonder how much this definition of a planet was motivated by the desire to be able to give elementary schoolers a nice small set of things to memorize.

Edit: actually I might be incorrect about this, the resolution is titled "Definition of a Planet in the Solar System", I'm not sure the IAU actually has a definition of what a "planet" would be outside the solar system, but they may be open to the idea that they exist :)

8
Taniwha 1 hour ago 1 reply      
So how come when NASA tells us there are tiny invisible planets light years away we all applaud, but somehow when they point to the evidence of climate change right here right now it's all a big con?
9
sizzzzlerz 4 hours ago 0 replies      
Once again, the universe shows that not only is it strange, it's stranger in ways we can't possibly imagine. This is truly a golden age of space science and, as Kay said in M.I.B., "I wonder what we'll know tomorrow.".
10
petrikapu 4 hours ago 1 reply      
Three of the discovered planets are in the habitable zone

https://www.dropbox.com/s/bdeog7ta7i4rb7c/alien%20planets.pn...

11
eb0la 4 hours ago 1 reply      
At Voyager-I speed (17 km/s) we need just 688k years to get there. Not bad considering the humble satellite doesn't accelerate anymore.
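That figure checks out, assuming constant speed with no further acceleration or deceleration:

```python
LY = 9.4607e15       # metres per light year
YEAR = 3.1557e7      # seconds per year

distance = 39 * LY   # metres to TRAPPIST-1
speed = 17_000       # Voyager 1's speed, m/s

years = distance / speed / YEAR
print(round(years, -3))  # ~688,000 years, consistent with the 688k above
```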
12
tuyguntn 3 hours ago 0 replies      
Every time I hear such news, I try to imagine how it would feel being there, billions of miles away, looking at the sky and saying "wow". I don't think we will be able to travel to distant planets in our lifetime, but thinking about it and trying to recreate that feeling always feels amazing to me.
13
webmaven 4 hours ago 0 replies      
14
rubicon33 4 hours ago 5 replies      
I was reading up about earth alternatives recently, and upon realizing the distance was always going to limit us, no matter how perfect the planet, I realized we need to travel at the speed of light.

I think our "best shot" at that right now, is to digitize humans. If we could store consciousness in binary, we could then transmit it at the speed of light (like we do with data every day!). You'd need a receiver on the distant planet though. So, your first 'payload' would have to be the receiver, and it would need to travel the slow old fashioned way :(

15
c-smile 49 minutes ago 0 replies      
It appears, then, that there are plenty of habitable planets out there. The numbers are big enough to conclude that other life is definitely out there somewhere, so the questions are:

1. Where are they all?

2. How will humanity react to the appearance of one of them? Will we finally stop our fights inside this sandbox and focus on the challenges that we are all facing together?

16
cletusw 4 hours ago 0 replies      
Link to live stream on YouTube (supports rewinding) https://www.youtube.com/watch?v=UdmHHpAsMVw
17
peterkelly 2 hours ago 0 replies      
12 parsecs?

No problem, buddy. We've got the best ship in town. If you've got the dough I'm sure me and my buddy Chewy can work something out for ya.

18
keyle 49 minutes ago 0 replies      
Here is another article on it with a bit more info on the planets

http://www.iflscience.com/space/breakthrough-in-search-for-l...

19
fsloth 2 hours ago 0 replies      
I would also like to point out: Giordano Bruno was right. https://en.wikipedia.org/wiki/Giordano_Bruno

Although, he had no proof for his intuition, so I cannot entirely cherish him as a martyr for modern cosmology.

20
fsloth 4 hours ago 2 replies      
"Due to its mass, the star has the ability to live for up to 45 trillion years"

So when the rest of the universe cools off and stars start to die, Trappist one keeps on shining and shining... We need to get there. But first, let's get rid of our genocidal tendencies.

21
coryfklein 4 hours ago 1 reply      
I'm having a hard time finding more data on the density of matter in interstellar space.

Wikipedia says

> The density of matter in the interstellar medium can vary considerably: the average is around 10^6 particles per m3 but cold molecular clouds can hold 10^8 to 10^12 per m3 [1]

But I imagine that interstellar gas would be easier to fly through than interstellar sand. Do we know the composition of matter in interstellar space?

[1] https://en.wikipedia.org/wiki/Outer_space#Interstellar_space

22
jonshariat 3 hours ago 3 replies      
This news is very interesting but how is this different than any of the recent planet discoveries? Anyone mind explaining the significance of this discovery over the others?
23
btkramer9 3 hours ago 0 replies      
Even though each of the 7 planets is likely tidally locked, it seems a habitable zone on each planet is still a possibility [1]

[1] http://nautil.us/blog/forget-earth_likewell-first-find-alien...

24
libeclipse 1 hour ago 0 replies      
Just for curiosity's sake: using the best technology humanity could get its hands on, what's the best case for the time it would take to reach that solar system?
25
elif 59 minutes ago 0 replies      
Now we just need a 40,000 generation ship.
26
chiph 3 hours ago 0 replies      
27
debt 25 minutes ago 0 replies      
That's still soo far away though.
28
scj 4 hours ago 0 replies      
It'll be interesting to see this reflected in the Drake equation (average number of planets per star that can potentially support life).

It'd be amazing if the number were above 1 (or terrible if you believe in the Great Filter hypothesis).
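That per-star term is one of seven factors in the Drake equation. A sketch of the whole product (every value below is a placeholder chosen purely for illustration, not a real estimate):

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# All values below are illustrative placeholders, not estimates.
R_star = 1.5     # star formation rate in the galaxy (stars/year)
f_p    = 1.0     # fraction of stars with planets
n_e    = 0.5     # potentially habitable planets per star with planets
f_l    = 0.1     # fraction of those where life arises
f_i    = 0.1     # fraction of those developing intelligence
f_c    = 0.1     # fraction of those releasing detectable signals
L      = 10_000  # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # ~7.5 communicating civilizations with these inputs
```

Discoveries like TRAPPIST-1 constrain n_e; the factors further right remain guesswork.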

29
kristianc 4 hours ago 2 replies      
Do we know definitively that there are no other planets in this system?

The presence of a large, Jupiter-sized planet in a system is thought to be helpful for deflecting asteroid impacts. Obviously we are talking 'to scale' given TRAPPIST-1 itself is around the size of Jupiter!

Learning more about this system is going to be fascinating though - the supposed inward migration of these planets may even help us understand more about how our own system formed.

30
5xman 3 hours ago 1 reply      
So, are there any plans to try to "communicate" with these planets? Or are we just going to sit and watch?
31
sigmaprimus 4 hours ago 3 replies      
Based on Voyager's current speed it would take approximately 18000 years to reach this system. The reference to a jet airplane taking millions of years seemed a bit misleading; with current technology and using gravitational boosting I would think that the time could be much less. Maybe it would be worth sending a probe? Granted, I will never see it in my lifetime, but it might be a project I could get behind.
32
bcaulfield 4 hours ago 2 replies      
Let's go
33
ausjke 2 hours ago 1 reply      
Just curious: how is this relevant to us, considering there are ziga-billion stars/planets not yet found in the universe? Why is this news important?
34
neom 2 hours ago 0 replies      
My fav thing about the universe is the idea that evolution + ingredients in another order could create something totally bizarre and wonderful!!
35
johngalt 4 hours ago 2 replies      
Close orbit around a relatively cold star. Interesting. I wonder what their 'day' vs 'year' looks like. Also their atmosphere and surface pressure. They could be 7 copies of Venus for all we know.
36
dforrestwilson1 3 hours ago 0 replies      
4 separate front-page posts about this on H/N so far..
37
mkoster 2 hours ago 0 replies      
Well, let's hope these guys take us there ;)

Apparently they are building an FTL engine: http://www.spacewarpdynamicsllc.com/latest-news

38
staltz 3 hours ago 1 reply      
Is there information already on how the short distances between planets would affect the G-forces felt on the surface?
39
blueprint 4 hours ago 0 replies      
Let's do this.
40
bikamonki 2 hours ago 0 replies      
Ok Elon, only 39 light years away. When will you take us there?
41
ahmetyas01 47 minutes ago 0 replies      
I knew it.
42
yohann305 1 hour ago 1 reply      
It's worth mentioning that the light we're receiving took 39 years to arrive at the telescope. So anything we see happened 39 years ago.

I thought it'd be nice to know...

43
huula 2 hours ago 0 replies      
See? I told you that seven is a magic number.
44
novalis78 3 hours ago 0 replies      
Let's change #OccupyMars to #OccupyTrappist ;-)
45
andrewclunn 4 hours ago 1 reply      
They're only one Kessel Run away!
46
cletus 3 hours ago 0 replies      
So this star is essentially going to live forever from what I understand (i.e. trillions of years). It's cool, and the habitable zone is close to the star.

So I know nothing about this kind of star but I was reading something last year talking about risks to life. There's of course the obvious like being hit by comets and asteroids, gamma ray bursts and so on.

But there's also CMEs (coronal mass ejections). A CME from the Sun directly hitting the Earth would be devastating. The chance of getting hit by a CME is inversely proportional to your distance from the star just because you occupy a smaller arc from the star's perspective.

I wonder if this kind of star and having the worlds so close would pose a huge threat from CMEs. Does this kind of star even have the same number of CMEs as say the Sun?

47
mozumder 3 hours ago 1 reply      
So, who do we call to make sure the newly discovered planets are named Chimay, Orval, Westvleteren, Rochefort, Westmalle, Achel, and La Trappe?
48
desireco42 4 hours ago 1 reply      
Is this from the NASA news conference that was announced for today? I couldn't find it in the article. They said they have an important announcement.
49
ant6n 3 hours ago 0 replies      
It seems that this could be an interesting system to be targeted by a FOCAL mission. The idea is to use the sun as a large telescope by exploiting its gravitational lens. One would have to place a telescope about 550 AU or so away from the sun, on the opposite side from the system you want to look at.

The magnification could be large enough to analyze features on exoplanets. My dream would be to build a telescope large enough that, with the help of the gravitational lens of the sun, we'd have a Google Earth-like view of the exoplanets.

http://www.newyorker.com/tech/elements/the-seventy-billion-m...

50
sperglord 4 hours ago 2 replies      
raises the question
51
h4nkoslo 4 hours ago 3 replies      
Uh oh.

The "great filter" hypothesis is essentially that the rarity of intelligent life has to be explained by some parameter of the Drake equation, and that whatever the "small" parameter is is either in our past or in our future.

If the "great filter" is the rarity of habitable worlds, then clearly we don't need to fear it, since we already found one. But if habitable worlds aren't rare, then it's more likely it lies in our future (e.g. global thermonuclear war, plague, difficulty of space travel, etc).

Thus things like discovery of exoplanets, bacteria on mars, etc should make us rather concerned.

https://en.m.wikipedia.org/wiki/Great_Filter

2
Tesla Says Model 3 on Track as Quarterly Loss Beats Estimates bloomberg.com
73 points by justin66  1 hour ago   29 comments top 6
1
paulpauper 18 minutes ago 2 replies      
People who lose money shorting Tesla are losing their shirts for the same reason as those who shorted Amazon: cash flow + growth matters more than profitability, PE ratios, or EPS. Tesla's car business is cash flow positive but the quarterly losses are due to infrastructure investments. This is how finance can be more subtle than meets the eye.

Tesla's car business is very profitable:

Tesla also reported an automotive gross margin excluding SBC and ZEV credit (non-GAAP), of 22.2% in the quarter, up from 19.7% a year ago, but down from 25.0% in Q3...

huge growth:

Looking at the future, Tesla said it expects to deliver 47,000 to 50,000 Model S and Model X vehicles combined in the first half of 2017, representing vehicle delivery growth of 61% to 71% compared with the same period last year.

reinvesting operating profits into investments:

The company also expects to invest between $2 billion and $2.5 billion in capex ahead of the start of Model 3 production and continues "to focus on capital efficiency while also investing in battery cell, pack and energy storage production at Gigafactory 1." It also forecast that both Model 3 and solar roof launches are on track for the second half of the year.

This company is firing (no pun intended) on all cylinders

2
mabbo 35 minutes ago 3 replies      
People get upset by the 'superhero' status of Elon Musk. Nerds everywhere worship the guy, buy his cars, watch his SpaceX launches.

But honestly, let him have it, I say. He's earning it. He's taking huge risks and succeeding where others weren't willing to try.

3
SaintGhurka 34 minutes ago 3 replies      
Is it just me reading too much financial news or does "beats estimates" sound like it was better than estimates?

Loss was 69 cents vs 53 cents expected.

Revenues beat estimates, though.

4
aerovistae 44 minutes ago 4 replies      
It's fun to read Mark Spiegel's twitter feed. I've never seen someone so desperately try to force a theory. I wonder why some people want to see Tesla fail so badly.
5
simonsarris 1 hour ago 0 replies      
From their letter (link to below if you want to read)

> Later this year, we expect to finalize locations for Gigafactories 3, 4 and possibly 5 (Gigafactory 2 is the Tesla solar plant in New York).

Should be an exciting conference call. (5:30pm EST)

http://files.shareholder.com/downloads/ABEA-4CW8X0/394401180...

6
Wizardgold 23 minutes ago 0 replies      
It seems the financial side of a company has to make a guess at the position they expect to be in when they report to shareholders. So long as they beat the estimates, everyone is happy. The guess does have to show enough forward movement to suggest the company is worth investing in. It's all a bit of a tightrope: get it right enough that the money stays in play so the company can continue doing what they want to do.
3
Tour of F# microsoft.com
168 points by dimitrov  3 hours ago   40 comments top 13
1
kornish 2 hours ago 6 replies      
One observation from watching Go and Rust gain popularity is that having an online code evaluation tool like https://play.rust-lang.org/ or https://play.golang.org/ can do wonders for adoption. People can experiment in a sandbox without having to hop into a development environment, and peers have an easier time debugging by easily sharing and reproducing problems.

For anyone interested in trying out F# online, it looks like Microsoft Research has such a tool: http://www.tryfsharp.org/Create. Unfortunately it looks like you have to create an account of some sort to share scripts, so these alternatives might be better:

https://repl.it/languages/fsharp

http://tryfs.net/

2
phillipcarter 1 hour ago 1 reply      
Author of the article here. Happy to see this up on HN!

The article is actually more of an annotated version of the F# Tutorial Script[0] which ships inside Visual Studio 2017 (also in other versions of Visual Studio, but the Tutorial script is a bit different there).

You can get started with F# just about everywhere:

* Visual Studio[1]

* Visual Studio for Mac[2]

* Visual Studio Code (via Ionide plugins)[3]

* .NET Core and the .NET CLI (`dotnet new console -lang F#`)[4]

* Azure Notebooks (Jupyter in the browser via Azure) [5]

[0]: https://github.com/Microsoft/visualfsharp/blob/master/vsinte...

[1]: https://www.visualstudio.com/vs/visual-studio-2017-rc/

[2]: https://www.visualstudio.com/vs/visual-studio-mac/

[3]: https://marketplace.visualstudio.com/items?itemName=Ionide.I...

[4]: https://dot.net/core

[5]: https://notebooks.azure.com

3
steego 2 hours ago 0 replies      
Just a reminder: Scott Wlaschin's book, F# for Fun and Profit, is a great free resource for people interested in F#. It's available here: https://fsharpforfunandprofit.com/

If you have a short attention span, I recently started posting sped up screencasts on twitter that range between 1-2 minutes. https://twitter.com/FSharpCasts

If there's a feature you want to see, let me know. I take requests.

4
illuminati1911 33 minutes ago 0 replies      
For anybody curious/interested in F# or any programmer who is interested in safe (functional) programming, check this out:

https://fsharpforfunandprofit.com/posts/is-your-language-unr...

I might be slightly biased but in my opinion it's one of the best programming articles I have ever read.

5
omni 6 minutes ago 0 replies      
> /// Conditionals use if/then/elid/elif/else.

The "elid" intrigued me and I tried to look it up but couldn't find anything. Is this just a typo?

6
problems 1 hour ago 2 replies      
The problem I had going into this without a strong functional background is that often times to do practical things you're forced to work with .NET libraries - these .NET libraries are not nice functional libraries and don't encourage you to think functionally. Eventually I felt like everything I wrote was wrong and I just gave up on it.
7
jsingleton 1 hour ago 0 replies      
There is also FAKE (F# Make), a DSL for build tasks (https://fsharp.github.io/FAKE/). Similar to CAKE (http://cakebuild.net/), which uses C#.
8
aashishkoirala 20 minutes ago 0 replies      
I know it's already been mentioned, but I just wanted to endorse https://fsharpforfunandprofit.com/ again - you could not ask for a better teacher than Scott Wlaschin if you want to learn F#.
9
agentultra 1 hour ago 1 reply      
So much like OCaml... but I like the ability to annotate units! That's very cool.
10
ice109 6 minutes ago 0 replies      
does anyone know what the state of C/F# on Linux is?
11
davidgl 2 hours ago 2 replies      
A more concise cheat sheet: http://dungpa.github.io/fsharp-cheatsheet/
12
euroclydon 23 minutes ago 0 replies      
How difficult is it to get an F# kernel for Jupyter running on my Mac?
13
melling 1 hour ago 2 replies      
What's the quick start for using F# on the Mac? Can I get a good native development environment? Am I better off running a Windows VM to get better tooling?
4
API Design Guide cloud.google.com
313 points by andybons  6 hours ago   98 comments top 14
1
andreygrehov 5 hours ago 1 reply      
I would like to add Microsoft's API Guidelines [1] here, which is also a well written document and can be helpful to anyone designing an API.

[1]: https://github.com/Microsoft/api-guidelines/blob/master/Guid...

2
daliwali 3 hours ago 4 replies      
What they describe is not REST. Nowhere does this document mention hyperlinks, a strict requirement of the REST architectural style.

The best analogy would be a simple web page, which usually contains hyperlinks that a client can follow to discover new information. Unfortunately, web developers' understanding of REST ends with HTML, and they re-invent the wheel, badly, every time they create an ad hoc JSON-over-HTTP service.

There is a standardized solution for machine-to-machine REST: JSON-LD [1], with best practices[2] to follow, and even some formalized specs[3][4]. To Google's credit, they are now parsing JSON-LD in search results, which is much nicer to read and write than the various HTML-based micro-data formats.

On a related note, REST has nothing to do with pretty URLs, naming conventions, or even HTTP verbs. That is to say, it is independent of the HTTP protocol, but maps quite naturally to it.

[1]: http://json-ld.org/

[2]: http://json-ld.org/spec/latest/json-ld-api-best-practices/

[3]: http://micro-api.org/

[4]: http://www.markus-lanthaler.com/hydra/

3
KabuseCha 5 hours ago 2 replies      
Fantastic Read!

But I am still looking for some books on good API design. Anybody have any recommendations?

4
nevi-me 1 hour ago 0 replies      
Very interesting read! I like that GOOG is pushing gRPC more on their own services. I've been a gRPC user since Sep/Oct last year, and it's made developing for Android, Node.js, JVM, Python more pleasant from a networking perspective. The ease of just moving logic from Node.js to a Java gRPC server, and then redirecting the HTTP2 proxy to the right place, has been awesome.

I've started teaching some people in the team how to use gRPC, and we're def going to be using it where permissible on client projects.

5
etaty 5 hours ago 1 reply      
I am curious if anyone went to GraphQL without regrets?
6
pbreit 2 hours ago 3 replies      
What is current consensus on client libraries? Braintree for example requires that you use their client libraries where as Stripe makes them optional. With Google's gRPC thing I can definitely understand using libraries for performance. Otherwise, isn't making simple REST calls without custom libraries sufficient for most uses? Or if you want a library, something generic like Unirest [1]?

1. http://unirest.io/

7
RubenSandwich 5 hours ago 3 replies      
Seems pretty good. Specifically, this part of the guide is pretty well written: https://cloud.google.com/apis/design/resources. One thing that is surprising to me, however, is that there is no mention of using HTTP status codes in responses.
8
arohner 5 hours ago 2 replies      
Step 1) document the endpoints enough that outside developers can write their own clients.

It took quite a bit of work for me to get a native Clojure client working to connect to the google cloud SDK. That was after wrestling with jar-hell around gRPC and calling the Java client from clojure, which is decidedly not pretty.

9
Dirlewanger 4 hours ago 2 replies      
Protocol Buffers...GraphQL...JSON-API...so many damn choices for API implementation! Next we need someone's essay of a blog post comparing/contrasting them all.

Also, the Protocol Buffers link in the 3rd paragraph is 404.

10
rodionos 3 hours ago 2 replies      

> HTTP Method DELETE. Payload: empty.

I know DELETE is not supposed to have any payload, but using PATCH is awkward if you have to delete multiple resources based on a query or a filter. You need to specify a 'delete' action as part of the PATCH request, which means the payload model has to be different. Just awkward.

11
somedumbguy22 3 hours ago 1 reply      
I wonder if someone from the Apigee team wrote these, as Google recently acquired Apigee[1], and the guidelines are mostly in line with what Apigee recommends.[2]

[1]https://techcrunch.com/2016/09/08/google-will-acquire-apigee...

[2]https://apigee.com/about/resources/ebooks/web-api-design

12
amingilani 5 hours ago 2 replies      
My biggest pain when designing a REST API is a standard authentication method that won't drive me crazy. So far I've always used 3rd-party modules to implement different kinds of authentication, but I never quite understood it in depth.

Apart from HTTP Basic Auth, but please don't use that.

13
jeppebemad 4 hours ago 1 reply      
I don't see the actual guidelines, only Contents, Introduction and Conventions. On iOS Chrome/Safari. Also the fixed buttons overflow.
14
camus2 5 hours ago 3 replies      
Please drop fixed headers from web pages. If you want easy access to the top of the page, use anchor links instead. On a laptop, headers often take a big chunk of the available screen. It just pisses me off every time I see a page with a fixed header. All your readers aren't using iMacs...
5
Physics as a Way of Thinking (1936) [pdf] osu.edu
69 points by 7alman  3 hours ago   4 comments top 3
1
killjoywashere 1 hour ago 0 replies      
Professor Smith was, at the time of the publication of this essay (in the Ohio State University's law journal), midway through a 20-year span as chairman of the Ohio State physics department.

He basically works his way through history to demonstrate the development of the modern experimental method and extrapolates that society would be best served by extending the scientific method to many more aspects of society (social and cultural issues, etc).

2
neutralid 18 minutes ago 0 replies      
Prof. Smith describes how social sciences could benefit from the modern, coordinate-free approach in physics. He also wishes for a tighter coupling in social sciences as we've seen historically with physics (theory) and engineering (practice).

Interesting article.

3
dest 1 hour ago 1 reply      
a tldr would be welcome
6
Writing an Interpreter in Go: The Paperback Edition thorstenball.com
166 points by misternugget  5 hours ago   45 comments top 20
1
joemi 27 minutes ago 0 replies      
Are CreateSpace books still of noticeably lower print quality than non-print-on-demand books? I saw a few a while (a few years?) ago, and the quality offended me, so I wrote off CreateSpace. I'd be interested to learn that's not the case anymore.

(Just to preemptively clarify: The bad quality I mentioned was most noticeable when compared to a non-POD book. On its own, it looks OK-ish and you might not think anything of it, but when you look at it next to another book you can tell.)

The book itself seems pretty neat though! I'm a PDFs for tech books kind of guy, personally.

2
zerr 5 hours ago 2 replies      
It is becoming quite popular (or maybe it was always the case?) for someone to learn a topic and write a book about it at the same time. It's interesting how that affects the quality of the content (versus books authored by people with expertise in the given topic).
3
staticassertion 39 minutes ago 0 replies      
I bought this book and I've been using it - but I'm writing the interpreter in Rust instead of Go. I really like it, and I think Go is actually a cool language for this due to its simplicity. I had very little Go experience but the language is drop dead boring, incredibly easy to pick up.

I'd recommend it.

4
dom96 2 hours ago 1 reply      
This is really awesome. What I love especially is that the source code is syntax highlighted, I really wish I could have gotten that for my book.
5
joshbaptiste 4 hours ago 0 replies      
Purchased an ebook copy (jfltech), while I will likely use Nim for my future toy language, this will help me immensely.
6
vortegne 5 hours ago 1 reply      
I will be getting one, because I loved the book so very much. It helped me tremendously when I was implementing my own little language.
7
tfryman 5 hours ago 1 reply      
I didn't see it anywhere, but does the print version come with the PDF version too?
8
rargulati 3 hours ago 0 replies      
Just got the paperback version - excited to dig into this. Signed up to the mailing list as well. Is there a central place for errata, or will that only be made available as updates to the pdf version?
9
Entangled 3 hours ago 1 reply      
Monkey is such a nice language, clean and syntactically lovable. I wonder if the same theory can be applied to make a whitespace-indented language like Python or Nim; I'd like to make one.
10
wwweston 2 hours ago 0 replies      
Well, if there's anything that might lure me back to Go, it's an escape hatch into another language. :b
11
wjh_ 3 hours ago 0 replies      
I think I might be buying this, seems like a good book! Have never heard of it before.
12
BlackjackCF 5 hours ago 0 replies      
I'm definitely buying this. Still working my way through the PDF version, and it's just so well-written and helpful!
13
munificent 4 hours ago 0 replies      
Ordered my copy. :)
14
xenihn 4 hours ago 1 reply      
I will definitely be grabbing a copy of this and going through it, thanks!

Anyone have recommendations for a book or tutorial for creating a REST API with Go?

15
AYBABTME 4 hours ago 1 reply      
Can anyone share what sort of interpreter this is? Is it an AST-walking kind of thing, or something more advanced?
16
orloffm 5 hours ago 1 reply      
So what was the source format? Markdown?
17
betodelrio 4 hours ago 0 replies      
Interesting. I would like to purchase the PDF version.

Is there a PROMO CODE available?

18
baconomatic 5 hours ago 1 reply      
I wish I would've waited to buy that, print would've been much better!
19
Zikes 5 hours ago 0 replies      
Very timely! I'm in the early stages of writing a custom interpreter in Go, so this is perfect for me.
20
swah 1 hour ago 0 replies      
I've been recently writing a web app in Go - I like the libraries and the tools - but when pausing to write a test script in python I immediately realized Go can't be the right tool for regular CRUD websites. Python is a joy, I love dynamic patterns when writing code. Dicts and strings! It flows...

So, yay escape hatches!

At some point it was going to be a pattern (Java + jruby/ola bini's language)...

7
APIs, robustness, and idempotency stripe.com
97 points by edwinwee  4 hours ago   22 comments top 5
1
brandur 1 hour ago 1 reply      
I authored this article and just wanted to leave a quick note on here that I'm more than happy to answer any questions, or debate/discuss the finer points of HTTP and API semantics ;)

An ex-colleague pointed out to me on Twitter today that there are other APIs out there that have developed a concept similar to Stripe's `Idempotency-Key` header, the "client tokens" used in EC2's API for example [1]. To my knowledge there hasn't really been a concerted effort to standardize such an idea more widely, but I might be wrong about that.

[1] http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_In...
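The mechanic behind an `Idempotency-Key` header can be sketched in a few lines. This is a toy illustration with made-up names (an in-memory dict stands in for the persistent key store a real service would use):

```python
# Toy sketch of server-side idempotency-key handling (hypothetical names;
# a real service would persist keys with a TTL alongside the full response).
_seen = {}  # idempotency key -> stored response

def handle_charge(idempotency_key, amount, charge_fn):
    """Run charge_fn(amount) at most once per key; replay the result on retries."""
    if idempotency_key in _seen:
        return _seen[idempotency_key]  # retry: return the stored response
    response = charge_fn(amount)
    _seen[idempotency_key] = response
    return response
```

A client that times out can then safely retry the same request with the same key without risking a double charge.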

2
pbreit 3 hours ago 4 replies      
One thing about Stripe's API I have mixed feelings about is the liberal versioning. My experience with 100s of payment integrations is that they get done once and hopefully never touched again. I know most of Stripe's updates are "additive" such that they are backwards compatible if coded liberally, but it can be confusing. Same with Lob.
3
vfaronov 2 hours ago 1 reply      
Curious that they don't mention HTTP conditional requests [1] even in passing. This mechanism is typically used for slightly different things, but you can, for example, make a PATCH request idempotent (in their sense) by adding an If-Match header to it. I'd say that Idempotency-Key itself may be considered a precondition and used with status codes 412 [2] and 428 [3].

By the way, WebDAV extended this mechanism with a general If header [4] for all your precondition needs. I'm kinda glad it didn't catch on though...

[1] https://tools.ietf.org/html/rfc7232

[2] https://tools.ietf.org/html/rfc7232#section-4.2

[3] https://tools.ietf.org/html/rfc6585#section-3

[4] https://tools.ietf.org/html/rfc4918#section-10.4
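As an illustration of the precondition idea, here is a toy If-Match check in the spirit of RFC 7232 and RFC 6585 (function and field names are made up):

```python
# Toy If-Match precondition check (hypothetical names): the PATCH only
# applies when the client's ETag still matches the current resource
# version; otherwise the server answers 412 (Precondition Failed) or,
# if the client sent no precondition at all, 428 (Precondition Required).

def apply_patch(resource, current_etag, if_match, patch):
    if if_match is None:
        return 428, resource           # no precondition supplied
    if if_match != current_etag:
        return 412, resource           # someone else modified the resource
    return 200, {**resource, **patch}  # precondition held; apply the patch
```

A lost-response retry with the old ETag then fails loudly with 412 instead of silently double-applying.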

4
josephorjoe 3 hours ago 0 replies      
Nice, clear article. I've always been impressed by the usefulness and clarity of Stripe's documentation. I should pay more attention to their blog.
5
Hydraulix989 1 hour ago 2 replies      
I'm shocked at how few HTTP libraries on GitHub properly handle exponential backoff, let alone retries.

Do the Internet a favor, and file an issue with your favorite HTTP library asking them to implement exponential backoff.

I haven't found anything in JS that does this properly though. Do people really just write apps that crap out upon the first HTTP request failure?

The best library I have come across is actually SquareUp's OkHttp (the payment processing companies seem to be the only ones getting this right).
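For reference, the usual recipe is capped exponential backoff with jitter; a minimal sketch (parameter defaults are illustrative, not any library's API):

```python
import random

# Capped exponential backoff with "full jitter": wait a random amount
# between 0 and min(cap, base * 2**attempt) before retry number `attempt`.
def backoff_delay(attempt, base=0.5, cap=30.0, rng=random.random):
    return rng() * min(cap, base * (2 ** attempt))
```

The jitter matters: without it, a fleet of clients that failed together retries together and hammers the recovering server in lockstep.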

8
Announcing DatHTTPD pfrazee.github.io
41 points by bpierre  2 hours ago   7 comments top 3
1
pfraze 2 hours ago 1 reply      
This is a part of the Beaker p2p browser project [1]. This server lets you host websites over the Dat protocol with DNS shortnames, as well as HTTPS as a fallback for (let's say) "legacy" browsers.

Happy to answer questions.

1. https://beakerbrowser.com/

EDIT: direct link to the repo https://github.com/beakerbrowser/dathttpd. We also have Prometheus/Grafana integration which is pretty handy; it's currently the easiest way to watch the health of a swarm.

2
p4bl0 1 hour ago 0 replies      
Has anyone tested Dat and also knows ZeroNet and IPFS? How do they compare? Dat seems very similar to ZeroNet.
3
webmaven 2 hours ago 1 reply      
This is really cool. Nice to see the Dat protocol[0] getting more uses and implementations.

[0] https://datproject.org/

9
Demangling C++ Symbols in Rust fitzgeraldnick.com
142 points by andrew3726  6 hours ago   17 comments top 6
1
kbenson 3 hours ago 0 replies      
Tom Tromey, a GNU hacker and buddy of mine, mentioned that historically, the canonical C++ demangler in libiberty (used by c++filt and gdb) has had tons of classic C bugs: use-after-free, out-of-bounds array accesses, etc, and that it falls over immediately when faced with a fuzzer. In fact, there were so many of these issues that gdb went so far as to install a signal handler to catch SIGSEGVs during demangling. It recovered from the segfaults by longjmping out of the signal handler and printing a warning message before moving along and pretending that nothing happened. My ears perked up. Those are the exact kinds of things Rust protects us from at compile time! A robust alternative might actually be a boon not just for the Rust community, but everybody who wants to demangle C++ symbols.

Then, later:

Additionally, I've been running American Fuzzy Lop (with afl.rs) on cpp_demangle overnight. It found a panic involving unhandled integer overflow, which I fixed. Since then, AFL hasn't triggered any panics, and it's never been able to find a crash (thanks Rust!) so I think cpp_demangle is fairly solid and robust.

That's what I like to see. Targeted useful reimplementations in Rust that play well to its strengths. In this case, as a double benefit to both the Rust ecosystem and to anyone that wants a robust demangling library.

2
nly 1 hour ago 1 reply      
LLVM's libcxxabi has a demangler which doesn't have the worst looking code in the world[0], has no external dependencies (outside of the C++ standard library), has lately been fuzzed[1] and is tested[2].

Switching languages is cool, but the Rust code is actually longer and still uses a hand written parser, so how can you be sure it is any more correct or won't eat all your memory?

[0] http://llvm.org/viewvc/llvm-project/libcxxabi/trunk/src/cxa_...

[1] http://llvm.org/viewvc/llvm-project/libcxxabi/trunk/fuzz/cxa...

[2] http://llvm.org/viewvc/llvm-project/libcxxabi/trunk/test/tes...

3
pjmlp 2 hours ago 1 reply      
> Linkers only support C identifiers for symbol names.

This is only true of UNIX system linkers, before the FOSS and UNIX clones wave, it was common for each compiler to have its own language specific linker.

4
problems 2 hours ago 2 replies      
How bad are the rules for MSVC compared to the Itanium ones and would you consider adding support for it too?

Not that I have a use case in mind or anything, just curious.

5
jjoonathan 2 hours ago 2 replies      
> These days, almost every C++ compiler uses the Itanium C++ ABI's name mangling rules. The notable exception is MSVC, which uses a completely different format.

You stay classy, Microsoft.

> It's not just the grammar that's huge, the symbols themselves are too. Here is a pretty big mangled C++ symbol from SpiderMonkey [...] That's 355 bytes!

Here's a >4kB symbol I encountered while liberating some ebooks from an abandoned DRM app:

 tetraphilia::transient_ptrs<tetraphilia::imaging_model::PixelProducer<T3AppTraits> >::ptr_type tetraphilia::imaging_model::MakeIdealPixelProducer<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 1ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 0ul> > > >, tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 
0ul, 0ul, 0, 0ul, 0ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 0ul> > > >, tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 1ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, 
-1, 3ul, 3ul> > > >, tetraphilia::Terminal> >, T3AppTraits, tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, tetraphilia::imaging_model::SeparableOperation<tetraphilia::imaging_model::ClipOperation<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > > >(tetraphilia::ArgType<tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 1ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 0ul> > > >, tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, 
tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 0ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::OneXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 0ul> > > >, tetraphilia::TypeList<tetraphilia::imaging_model::XWalkerCluster<tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, tetraphilia::imaging_model::GraphicXWalkerList3<tetraphilia::imaging_model::const_UnifiedGraphicXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits>, 0ul, 0, 1ul, 0ul, 0, 0ul, 0ul, 0, 0ul, 1ul>, tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> >, 
tetraphilia::imaging_model::GraphicXWalker<tetraphilia::imaging_model::const_IgnoredRasterXWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 0ul, 0, 1ul, 1ul>, tetraphilia::imaging_model::const_SpecializedRasterXWalker<unsigned char, 2ul, -1, 3ul, 3ul> > > >, tetraphilia::Terminal> > > >, T3AppTraits::context_type&, tetraphilia::imaging_model::Constraints<T3AppTraits> const&, tetraphilia::imaging_model::SeparableOperation<tetraphilia::imaging_model::ClipOperation<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > >, tetraphilia::imaging_model::const_GraphicYWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > const*, tetraphilia::imaging_model::const_GraphicYWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > const*, tetraphilia::imaging_model::const_GraphicYWalker<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> > const*, tetraphilia::imaging_model::SegmentFactory<tetraphilia::imaging_model::ByteSignalTraits<T3AppTraits> >*)

6
shmerl 1 hour ago 1 reply      
Can it help to implement C++ bindings for Rust?
10
Social Media Needs a Travel Mode idlewords.com
115 points by jmduke  1 hour ago   111 comments top 25
1
tptacek 1 hour ago 5 replies      
To be effective, this would need to become a kind of norm for overseas travelers, the same way traveler's checks used to be. The idea would be that just as you don't carry a bag with your birth certificate, stock certificates, property titles, and jewelry with you, you also don't carry a 10 year archive of every email you've ever sent or a detailed list of every person you've ever spoken to.

In particular, it needs to be normal enough that a significant fraction of all travelers do it. The feature can't be marketed as a protection for at-risk travelers, but as a common-sense safety mechanism useful to all travelers.

I think it's crazy that people walk around with phones that have access to years of email communications, and that even in the happiest timeline we could have ended up on after 2016, features like this are long overdue.

2
tacostakohashi 59 minutes ago 2 replies      
This is completely naive.

Firstly, social media's only incentive is to make your data as widely available as possible (in the interests of ad revenue), and maintain a good relationship with the government in their jurisdiction. Every other existing "privacy" setting on Facebook, LinkedIn, etc is already obfuscated to the point of unusability, for this reason, and "travel mode" would be no different.

Secondly, let's imagine that FB did implement a watertight "travel mode" that hid your embarrassing data effectively while you were travelling. Third parties would just start capturing and storing posts while you have "travel mode" off, and sell that to CBP, or whoever else wants to pay for it.

3
kefka 7 minutes ago 0 replies      
(Read on the screen)

"$Company noticed an intrusion by US federal agents/border guards and has marked this as a compromised account. Please have your designated friend, if set, authenticate your account."

It's now out of the person's sphere to fix, even if coerced. And companies can defend this because an Acceptable Use Agreement violation is itself a breach of a federal law: the CFAA.

That, and it seems the only way to stop these issues now is to jam it up in legal limbo by citizens.

4
jethro_tell 1 hour ago 4 replies      
If they get your password, can't they just turn off travel mode for you? Or wait until you do the same? I don't know that there is a technical answer here except wiping your phone/laptop before you go through customs.

The real solution is a political one, where we speak up, legislate, and litigate so that the 4th amendment applies at the border.

5
hd4 8 minutes ago 1 reply      
This may be a hugely naive question but what could they do if I simply don't use social media?

Do they automatically suspect someone who denies using it and what would follow in that event?

6
rdl 30 minutes ago 0 replies      
I think you could easily justify something like this as a "travel mode" not just for border security, but "in case your phone is lost/stolen while traveling". Make it so you have full or enhanced access to very recent stuff (photos, status updates, etc.) from the trip itself, and don't have access to as much from before the trip. Help defeat localization settings in the place where you're traveling, and get tourist/visitor-specific ads instead of local ads. Value for the user (usability and safety) as well as for the social media network and advertisers.

The other form of this which would make sense: worksafe mode or public mode. If you're logging into your facebook/twitter account from a public computer, perhaps it doesn't have as full and unlimited access, and doesn't have access to non-reversible account actions, and strongly logs out. If you're logging in from a place defined as "work", it doesn't have notifications, certain groups, etc. (the "giving a meeting presentation on your laptop when a racy notification from spouse pops up" problem).

7
mnm1 1 hour ago 2 replies      
It's not like they couldn't make you turn travel mode off if they wanted to. How about pushing for a law closing this 4th amendment loophole at the borders, at least for citizens?

Not that it'd do much. If the border agents really want to see one's social media accounts, I have zero doubt they can get that data from other government agencies. In fact, they probably already have it. It sounds to me like they're just trying to assert their power and dominance over the people whose accounts they are demanding access to as a way to get off on intimidating others. Pretty typical behavior by law enforcement officers the world over.

8
mirimir 19 minutes ago 0 replies      
This isn't very different from refusing to provide login credentials. Sure, turning travel mode on and off could require keys, which are present only on primary devices, which are left at home. So there'd be no need to lie.

However, foreigners would likely be turned back. Because using travel mode is arguably evidence of hiding stuff. And citizens might still be detained. It seems unlikely that they'd send agents to homes, to turn off travel mode. But it's arguably not impossible.

9
Bokagha 39 minutes ago 1 reply      
Why not go further with the idea of a Travel Mode toggle and have it be tied to the device itself? That would cut off easy analysis of any data left on the device, social media or otherwise.

Google and/or Apple could add this as a new menu toggle similar to Airplane mode. Once switched on, while in an airport, prevents the device from being unlocked. Then by utilizing geofencing, once the device leaves the airport it unlocks and can be used again.

10
maxerickson 1 hour ago 3 replies      
One thing I wonder about is how quickly a travel mode would become grounds for denial of entry.
11
k-mcgrady 59 minutes ago 1 reply      
I feel it would be quite obvious, looking at your device and its lack of data, that such a feature was in force, and they would just deny you entry. Nice idea, but the problems need to be fixed in law. IMO technical solutions are just temporary bandaids.
12
thex10 26 minutes ago 1 reply      
Nice idea. But it's treating the symptom rather than the root problem.
13
nicoles 1 hour ago 0 replies      
I'd love to see travel modes take off. I was talking with some friends about how great it would be to be able to switch my login credentials to some sort of shared multiple person-required password for the duration of a flight. Like Shamir's Secret Sharing, but temporarily.
14
binarymax 56 minutes ago 1 reply      
An article with an interesting suggestion and a noble goal but I'm not the first one to say it:

Technology cannot solve the problem we currently face with erosion of privacy at the border! These clever tricks trying to get around the issue only kick the problem downfield, and likely won't work effectively. If they found out you had travel mode enabled - you may be denied entry or worse (note the comment from @mholt) - they would just detain you until the time lock runs out.

15
rdl 41 minutes ago 0 replies      
I'd been thinking about a similar thing: a "limited access for border guards" mode.

Basically, you can turn over the phone to border guards in a way which gives them access to what is on the phone, but which logs actions, and allows you to easily revert/revoke any changes they make. (This would also be a mode you'd turn over to an employer demanding access).

Potentially this mode might also block access to certain things (secret FB groups, archives over a certain age, some chat logs), but would otherwise be fully functional.

The benefit would mainly be that all actions taken would be logged and reportable, as a way to try to keep authorities from poking in places they shouldn't. It seems they are NOT mostly using forensic imaging tools, but logging in directly on the devices, at least right now, so there would be some value.
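The logged-and-revertible part of that idea can be sketched in a few lines. This is a toy design, not any real product's API:

```python
# Toy "supervised session": every mutation made while the device is in the
# supervised mode is journaled, so the owner can later review exactly what
# was touched and undo all of it in reverse order.
_MISSING = object()  # sentinel: distinguishes "key was absent" from None

class SupervisedSession:
    def __init__(self, data):
        self.data = data
        self.journal = []  # (key, previous value), newest last

    def set(self, key, value):
        self.journal.append((key, self.data.get(key, _MISSING)))
        self.data[key] = value

    def revert_all(self):
        while self.journal:
            key, old = self.journal.pop()
            if old is _MISSING:
                del self.data[key]   # key didn't exist before: remove it
            else:
                self.data[key] = old  # restore the prior value
```

The journal doubles as the report of what the inspecting party did.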

16
dustinmoris 17 minutes ago 1 reply      
Social media doesn't need a travel mode. Non-US citizens should just stop travelling to the US whilst the country is violating human privacy rights at border control.
17
ecopoesis 58 minutes ago 1 reply      
There is no technical solution to this. If you want there to be no searches of your phone when crossing the border, speak to your representative in Congress and your Senator. If they don't listen, then vote for someone else or even better, run yourself.

The only way this is going to change is with a change in the law.

18
mholt 1 hour ago 2 replies      
How do you turn a travel mode off? Once border patrol knows this feature exists and how it works, the jig is up. Any expiration/timeout would just cause you to be detained for the duration the travel mode is enabled. A second password to turn it off would just cause that second password to be coerced out of you. Location-based deactivation can be spoofed.
19
p4lindromica 47 minutes ago 1 reply      
What is actually needed is a dead man's switch: a secondary password that, when entered, destroys the security enclave/TPM to render the device unreadable.
20
lj3 32 minutes ago 0 replies      
Didn't Moxie come up with something like this a while back? It was a bit of a non-starter since you had to install a custom version of Android in order to get it to work.
21
ar15saveslives 22 minutes ago 0 replies      
Why is it a bad idea to have fake FB/Gmail/Twitter accounts?
22
pizza 1 hour ago 0 replies      
Airplane mode: from the end of the microwave peril to the era of alternative inspection, in under 10 years.
23
debt 23 minutes ago 1 reply      
Most Americans don't travel, so this likely won't happen in the US; even fewer travel internationally.

http://travel.trade.gov/view/m-2016-O-001/index.html

24
chinathrow 57 minutes ago 0 replies      
I feel that all technical modes will fail.

Why?

- We first had PIN numbers. Easily cracked/defeated.

- We then had passwords. Provide them or go back home where you came from.

- We might have travel mode. Defeated or made illegal. Go back home.

I think the only resolution to this is political. Make these searches go away - worldwide - as we near a very bad precedent.

25
vosper 1 hour ago 1 reply      
> We need a 'trip mode' for social media sites that reduces our contact list and history to a minimal subset of what the site normally offers. Not only would such a feature protect people forced to give their passwords at the border, but it would mitigate the many additional threats to privacy they face when they use their social media accounts away from home.

Border security officer: I see you have trip mode on. Turn it off, and give us the phone, or you're not getting into the country.

Reminds me a little of this: https://xkcd.com/538/

11
Securitybot: Open Sourcing Automated Security at Scale dropbox.com
57 points by hepha1979  4 hours ago   10 comments top 5
1
perlgeek 2 hours ago 0 replies      
The next logical evolution is writing a bot that responds with "yes" and some BOFH-style explanation in your stead when securitybot comes asking.

Which is my way of saying that those automatic queries will likely start to annoy folks very soon, and they'll find a way not to deal with them.

2
benmarks 3 hours ago 1 reply      
Interesting timing given that Netflix announced an open source security offering of their own: http://techblog.netflix.com/2017/02/introducing-netflix-stet...
3
misiti3780 3 hours ago 2 replies      
If I am reading this correctly, I'm a little surprised that Dropbox is sending their security problems through another company's (Slack's) chat system.
4
siliconc0w 2 hours ago 1 reply      
Ideally you can practice immutable infrastructure and avoid running any ad-hoc commands on non-dev systems. Especially administrative ones using sudo. Takes a bit of a culture shift though if people aren't used to working that way.
5
SEJeff 2 hours ago 0 replies      
To get around HN's hug of death taking out the dropbox blog:

http://archive.is/NqfmG

12
Announcing TypeScript 2.2 microsoft.com
180 points by pingec  5 hours ago   71 comments top 7
1
ng12 4 hours ago 6 replies      
Speaking of tooling -- has anyone had a good experience with Typescript in IntelliJ/Webstorm? It does a fantastic job figuring out ES6 code but seems to totally choke on Typescript. I'd like to avoid switching to Visual Studio.
2
pixie_ 4 hours ago 0 replies      
Woo :) whenever I have to work with an old javascript code base now I always add typings to it first. It makes it orders of magnitude easier to work with and refactor after that.
3
bcherny 4 hours ago 0 replies      
Object type, better index access, and the quick fixes are all great additions. Awesome job TS team, keep it coming!
4
aduffy 5 hours ago 3 replies      
My favorite: the new `object` type. Non-nullable and cannot refer to primitives.
5
treehau5 4 hours ago 6 replies      
I know this probably comes up each typescript thread, but I still cannot figure out why I should use TS instead of flow + es6, and maintaining the typing definitions always discourages me as it's one more thing that needs to be kept up to date. Am I working on old information here?
6
ihsw 4 hours ago 0 replies      
Relaxing index-signature access is definitely great as it will make writing code easier. Dynamic properties can lead you to hairy code but it's hard to ignore the reality that it's used everywhere.

For example, no more awkward `e["code"]` where `e.code` is just as valid.

7
codr4life 4 hours ago 1 reply      
Crumbles from the design-table. Oh look, a new object type. I'm not even sure this is an improvement to straight JS any more, it just fails in more exotic ways from pretending to be something it isn't. Once the choice is made to compile to JS, there are plenty of real languages to choose from.
13
Show HN: Get a Slack message when your brand is mentioned on HN littlebirdie.io
90 points by zepolen  6 hours ago   19 comments top 11
1
LeonidBugaev 4 hours ago 3 replies      
Do not wanna be that guy, but you can do that with Zapier and way more https://zapier.com/zapbook/zaps/675/post-message-slack-when-...

We (tyk.io) use Slack notifications with Zapier for various searches on HN, Reddit, Twitter, Stackoverflow or even Google Alerts (the last one use built-in Slack `/feed` feature).

2
ngokevin 14 minutes ago 0 replies      
Mention (https://web.mention.com/) is another social-mention tracker. I use it for an open source project and would recommend it.
3
nickstinemates 5 hours ago 0 replies      
Have been using https://www.hnwatcher.com/ for more than 3 years. Love it.
4
zepolen 6 hours ago 0 replies      
This was made for my own and a friend's use, and we found it useful; maybe others will find it useful too.
5
tlrobinson 39 minutes ago 0 replies      
There's also https://notify.ly/ which includes "dozens of sources, like Twitter, Facebook, Google+, Reddit, Blogs, News, Medium & Product Hunt."
6
koolba 5 hours ago 1 reply      
Free product idea: Same general concept but applied to the content of sites that make the front page.

In your Bitcoin example, one would be notified if the displayed content of an article on the front page (or say top N, or greater than M points, ...) includes the word "Bitcoin".

Bonus points if you can do complex or negative matches. Ex: +database -mongodb
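A toy matcher for that query syntax (bare terms and +terms must all appear, -terms must not; naive substring matching, purely illustrative):

```python
# Toy "+term -term" matcher: True when every required term occurs in the
# text and no excluded term does. Bare tokens are treated as required.
def matches(query, text):
    text = text.lower()
    for token in query.lower().split():
        if token.startswith("-"):
            if token[1:] in text:
                return False        # excluded term present
        elif token.lstrip("+") not in text:
            return False            # required term missing
    return True
```

A real implementation would want word-boundary matching so "database" doesn't also match "databases-of-databases" noise, but the boolean logic is the same.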

7
fridaa 4 hours ago 0 replies      
F5Bot[1] is a free alternative that sends an email instead.

[1] https://f5bot.com

8
tyingq 5 hours ago 1 reply      
Cool. Is there a blog post somewhere that shows how it works internally? I'm curious if there's some formal named entity recognition, or more basic string matching. Would it work well, for example, if you had a brand name that was also a common term?
9
guillegette 5 hours ago 2 replies      
Another suggestion: Send me the posts on the front page that have more than 100 (or any amount of) votes. Or maybe a daily message of yesterday's front page.

Thanks!

10
marcinkuzminski 6 hours ago 0 replies      
This looks nice! I was missing such a thing to search for Hacker News mentions of our company.
11
rrggrr 4 hours ago 1 reply      
Would be great if we could select source (eg. not have Reddit)
14
Launch HN: Bitrise (YC W17) Continuous Integration and Delivery for Mobile Apps
100 points by thebloodrabbit  4 hours ago   32 comments top 19
1
joeblau 3 minutes ago 0 replies      
I've been using Buddy Build and I really enjoy their environment. How does this compare to that?
2
baronofcheese 4 hours ago 1 reply      
I love bitrise, especially since it is free for OSS projects. This means I don't need a build server at home for iOS, whenever I need to build and release my iOS libraries. It is fast. It is flexible and gives you the possibility of making your build pipeline using their blocks and other OSS blocks around on GitHub. You can use build scripts such as Cake, Fake and many more. More importantly the guys behind bitrise are very helpful when encountering issues.

Good job so far!

3
christop 20 minutes ago 1 reply      
Do the Docker containers that builds run in still provide root access to the Docker socket, allowing people to break out of the container?
4
ppostigo 3 hours ago 1 reply      
Great product! I was indeed waiting for something similar to appear in the market. I run an app creator, and app building and submission is a big pain.

Fastlane solves part of the problem, but I think you solve it all.

By the way, I've found this tweet: https://twitter.com/wercker/status/431757646333751296 where a similar company explains that a service like this one is not allowed by Apple's EULA: http://store.apple.com/Catalog/US/Images/MacOSX.htm

Did you have any problem with Apple in the past?

5
viktorbenei 4 hours ago 0 replies      
Just some notes for the CLI:

- It's the same runner which is used for running the builds on https://www.bitrise.io/

- You can run the same build config (YML) with it both on https://www.bitrise.io/ and on your Mac/Linux

- CLI [home page](https://www.bitrise.io/cli) | [docs](http://devcenter.bitrise.io/bitrise-cli/) | [GitHub](https://github.com/bitrise-io/bitrise)

(disclaimer: CTO here)

6
grivescorbett 1 hour ago 1 reply      
Bitrise is a wonderful product. Last year a close friend and I formed an LLC and built an iOS app on contract. There was an insane number of things that we had to figure out from scratch, but our CI Just Worked and allowed us to easily send install links to our client.
7
hamstercat 1 hour ago 1 reply      
Bitrise is pretty great. CI for iOS and Android apps has never been easier. I'm currently on the free plan, but will gladly pay for a pro plan when/if my apps become profitable.
8
jclardy 4 hours ago 1 reply      
I've used Bitrise for about the past year as an iOS dev at a fairly small company. I really have no complaints with the service. Super easy to set up and maintain an app on there. Also a ton of integrations (Slack, CocoaPods, Carthage, HockeyApp). Our current setup is pretty simple, primarily creating an archive and posting a link to a Slack channel, but we are hoping to start getting some tests running on there soon.
9
mparis 49 minutes ago 0 replies      
Congrats on the launch guys! Bitrise is THE CI tool for iOS apps.
10
vytis 1 hour ago 1 reply      
Could you do a short comparison to what CircleCI is offering for iOS and what Bitrise's advantage is?
11
nicoles 1 hour ago 0 replies      
Congrats Bitrise team! I super love your product, and I've really enjoyed using it for my iOS projects!
12
msencenb 4 hours ago 2 replies      
Does the CLI work on Ubuntu, or just Mac?

I run my current CI/CD pipeline through Jenkins for Rails, and I would like to keep everything running through Jenkins. Is there a way to do this currently with Bitrise?

13
scosman 4 hours ago 0 replies      
We use bitrise for iOS CI. Nothing but great things to say about them!
14
martinald 3 hours ago 0 replies      
This is a great product - by far the best solution to mobile CI I've seen. Serious congrats into getting into YC guys!
15
hesamk 2 hours ago 0 replies      
Bitrise is awesome. I have worked with Travis, Jenkins and Circle, and I prefer Bitrise among them.

Keep Fucking Fire!!!

16
ericfr11 3 hours ago 0 replies      
Love Bitrise. It addresses all the needs for flawless iOS and Android CI, from build triggers to customized variables, and push to app stores.
17
macdrevx 2 hours ago 0 replies      
Bitrise rocks my world!
18
alexprice 1 hour ago 0 replies      
I love Bitrise. Revolutionary!!!
19
alimoeeny 4 hours ago 1 reply      
Can you elaborate on what you have that Travis does not?
15
Mailgun Becomes an Independent Company mailgun.com
187 points by kikki  9 hours ago   74 comments top 28
1
pm90 7 hours ago 2 replies      
Thus begins the piecemeal partitioning of Rackspace. It was pretty obvious that would be the ultimate result after being taken private by Apollo. To be sure, there was much inertia within the company against letting go of services that weren't profitable / weren't the future of the company anymore, even before the acquisition.

All the while, you have a leadership that continues to say "Everything is OK! This is good news overall for the company! Let's continue to work hard in solidarity with each other! We're one big happy family!" It's important for people (in general) to see past that kind of management bullshit, look at the numbers (profitability, customers, etc.) and make a very informed decision about their future, lest they get caught unaware by "restructuring".

Disclosure: Obviously, a former Racker. Loved the peers I worked with and the company culture. Management was (is?) a total shit-show. I was lucky enough to see the future and take a better job before the company went private.

2
morrbo 7 hours ago 3 replies      
I agree, Mailgun is fantastic. However, their lack of 2-factor authentication is seriously worrying.

The only other issue I have with them is that the "history" for messages has a really stupid encoding. Basically, when a message fails or gets marked as spam, we have a webhook set up. This works great. However, when looking at the message source, it has a dumb encoding on carriage returns and colons. It's not the biggest issue, but still annoying.

We ended up making a little appliance to resend failed emails for our sales guys. Basically, this had to have the code "Replace("=\r\n", "").Replace("=0D=0A", "").Replace("=3D3D", "").Replace("=3D", "=");" instead of just being a straight copy-paste. Ideally we could have a "resend" button in the console.
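The sequences being stripped here ("=3D", "=0D=0A", trailing "=") are MIME quoted-printable encoding, so instead of chaining `Replace` calls, a decoder can be used directly. A Python sketch using the stdlib `quopri` module (illustrative only; the exact shape of Mailgun's stored message source is an assumption):

```python
import quopri

# A body stored with quoted-printable encoding:
# "=3A" encodes ":", "=0D=0A" encodes CRLF, and a trailing "="
# is a soft line break that should disappear entirely.
raw = b"Subject=3A Hello=0D=0AThis line was wrap=\nped softly."

# quopri.decodestring reverses all of these in one call.
decoded = quopri.decodestring(raw)
print(decoded.decode("ascii"))
```

The same approach covers the "=3D3D" case too, since "=3D" decodes to a literal "=" wherever it appears.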

3
bretpiatt 5 hours ago 1 reply      
I'm a Mailgun customer and CEO of Jungle Disk (that I bought from Rackspace a year ago). As a Mailgun customer I'm really excited about this. For comparison at Jungle Disk we've already doubled the R&D investment in our first year and will continue to grow it over time. I believe the Mailgun team will be able to really accelerate what they're able to do too. Secondly, I'm happy to have more neighbors moving to our area of town!
4
brightball 7 hours ago 2 replies      
Mailgun's a solid service. They've got the best inbound handling of all of the services out there in terms of letting you apply rules before the messages hit your servers.

I haven't looked at their outbound service in a while because I've been so impressed with Sendgrid's dual DKIM CNAME setup so that they can handle automatically rotating your DKIM keys without bothering you...that it's really hard to even think about trying somebody else.

5
bambax 46 minutes ago 0 replies      
I really, really like Mailgun! and can't understand why they don't get more "press" or mentions on the forums, etc. It seems only their competitors are ever talked about.

Yet with Mailgun everything always works, the API is super simple and so is integration. And their free tier lets you handle 10,000 msg per month!!

Maybe they can use some marketing, because the product is great!

6
ChefDenominator 7 hours ago 2 replies      
Maybe this means Mailgun's lists feature will be friendly to DMARC? It was such a great solution for domains not really requiring a mail server for inbound mail, that is, until everyone started caring about p=reject (well, except for Gmail; I'm still not sure how much my mail has to not conform to standards before they refuse to deliver it to even a spam folder).
7
mrmch 58 minutes ago 0 replies      
This is awesome news for Mailgun customers, kudos to Will and the Mailgun team for pulling this off.

Mailgun has one of the best inbound/outbound API combinations available, great for companies with a strong developer team.

8
leetrout 3 hours ago 1 reply      
I see a lot of comments about alternatives and such. https://postmarkapp.com/ is really nice. I've used them for years in all of my personal projects and it has always Just Worked.

I've avoided MailGun & SendGrid entirely for various reasons.

9
fasteo 6 hours ago 1 reply      
Congrats from a happy - and paying - customer. Sending around 500K transactional emails/month without an issue.

Hope they add support for tracking against SSL/HSTS sites.

10
otto_ortega 3 hours ago 1 reply      
Best of luck. All I ask/need from them is to keep their free tier available; don't be another Mandrill, please!
11
AlexB138 8 hours ago 1 reply      
Rackspace spun off both the Cloud Sites team and Jungle Disk last year, along with shutting down a team or two. Seems they're focusing on slimming down.
12
arenaninja 2 hours ago 0 replies      
Having used both Rackspace and Mailgun and had a great experience with both I'm sad that it's come to this.

Good luck to Mailgun! DFTBA

13
kuon 8 hours ago 2 replies      
I moved to mailgun after the mandrill "incident" and I have been very happy since. I hope the service will stay stable!
14
scandox 6 hours ago 0 replies      
> prior to closing

This gave me a mini heart attack... I guess I don't think about VCs and investments as much as the average reader of these things.

15
eykanal 7 hours ago 2 replies      
> If you are a current customer, you remain in good hands. Nearly every Mailgun employee and all of leadership is continuing with the new organization and excited about the mission ahead of us.

...except Bob over there, but he's kind of a grumpy old man anyways, so you can just ignore him.

Pro tip: Just write "The team is very excited" or something like that; it saves that tiny bit of awkwardness.

16
meesterdude 5 hours ago 0 replies      
I've used Mailgun for several high-volume clients and overall it works, but the feeling I got when working with their API, their docs, their UI and their support is of a product in maintenance mode. There is a lack of polish, a lack of effort, and a lack of overall robustness. But, it works.

I hope the spin-off lets them focus on improving the service and the surrounding experience; I definitely would rather they succeed than go belly up. As it stands, if someone pops up with similar offerings, I'd definitely check them out. But Mailgun isn't impossibly far off from creating an exceptional product. The question is: with this change, will they?

17
kikki 8 hours ago 3 replies      
I'm a big fan of Mailgun, but haven't actually sent enough mails in a month yet for them to actually charge me. I wonder what % of their users are the same?
18
harrisreynolds 3 hours ago 1 reply      
I hope this helps them improve. The lack of decent error messages and support caused me to switch to SparkPost.
19
tarikozket 7 hours ago 1 reply      
Why is the post signed by "CEO of Mallgun"?
20
leesalminen 8 hours ago 0 replies      
I'm glad they learned the importance of support from RS and internalized it. I have noticed an increase in quality of support over the past couple months.
21
razin 7 hours ago 3 replies      
Can anyone shed some light as to why companies might choose to spin out a division into a separate entity?
22
CodeWriter23 5 hours ago 0 replies      
This is great news and puts my Mailgun-related fears about the Rackspace acquisition to rest. I look forward to actually paying for Mailgun's service soon.
23
pedroborges 8 hours ago 0 replies      
Congrats guys!

- From a happy customer.

24
laktek 7 hours ago 2 replies      
Has anyone done any comparison between API based email services like AWS SES, Mailgun and SendGrid?
25
ferrantim 8 hours ago 0 replies      
Congrats to the entire Mailgun team. I'm looking forward to seeing all the awesome stuff you do as an independent company!
26
fmariluis 8 hours ago 0 replies      
I'm a happy customer and I'm really glad to hear this.
27
ericcholis 4 hours ago 0 replies      
Next up, Object Rocket?
28
Animats 3 hours ago 0 replies      
Does this mean we can now identify and block Mailgun's IP block to reduce spam?
16
Bootstrapped A Python library to generate confidence intervals github.com
47 points by jimarcey  4 hours ago   10 comments top 4
1
petters 3 hours ago 1 reply      
There are better algorithms for bootstrap intervals that you perhaps should look into. Better in the sense of quality, not speed.

Google e.g. "interval BCa"
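For reference, the plain percentile bootstrap that BCa refines can be sketched in a few lines of pure Python (a simplified illustration, not the library's implementation; BCa additionally corrects for bias and skewness/acceleration):

```python
import random
import statistics

def percentile_bootstrap_ci(data, stat=statistics.mean,
                            n_resamples=2000, alpha=0.05, seed=0):
    """Plain percentile bootstrap CI: resample with replacement,
    recompute the statistic, and read off the empirical quantiles."""
    rng = random.Random(seed)
    stats = sorted(
        stat(rng.choices(data, k=len(data))) for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical sample, purely for illustration.
sample = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7, 10.4, 12.3]
print(percentile_bootstrap_ci(sample))
```

The percentile method is known to undercover for small samples and skewed statistics, which is exactly where BCa-style corrections earn their keep.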

2
Sauliusl 3 hours ago 1 reply      
Neat!

OP, how does this compare to scikits.bootstrap [1] feature/performance-wise?

[1] https://scikits.appspot.com/bootstrap

3
pbnjay 4 hours ago 3 replies      
It's a nice wrapper on a powerful technique. Could be very useful to some folks - but requiring numpy and pandas is kind of excessive.
4
startupdiscuss 3 hours ago 0 replies      
I am 87% confident that this has a 74% chance of hitting it off with the HN community.
17
Common Multithreading Mistakes in C# Unsafe Assumptions benbowen.blog
174 points by douche  11 hours ago   68 comments top 17
1
exDM69 10 hours ago 2 replies      
Nice writeup, and most of this isn't specific to C#; it applies to native code (or Java) as well. These are typical misconceptions of people who have not studied concurrency.

What is specific to C# (applies to Java too) is having an implicit mutex/condition pair in each object. I think this is a terrible design mistake because it's not very practical and it's confusing to newbies (a big stumbling block for students when I studied concurrency at the uni).

It's not very practical because in most concurrency tasks I've dealt with (not using C# or Java), there are typically more conditions than there are mutexes (typically 2-4 + O(n) conditions per mutex). In Java/C# land, the typical solution would be to have a complex conditional expression, all the threads spinning on a .wait() there, and then over-use of .notifyAll() instead of .notify(), causing lots of spurious wakeups and wasting precious CPU cycles.

It's confusing because of the reason above, it's easy to go "I'll solve this by adding another mutex". Unfortunately this is seldom the correct solution (to any problem), and a much better result would be achieved adding more condition variables to wait on.

I wouldn't mind too much about a design mistake in a language I almost never use (Java/C#) if it wasn't the de facto language for learning about concurrency at universities. This has produced so many engineers with a twisted view of concurrency. I understand that "Java is easy" and "C is hard", but when we're already talking about memory models, multi-core, cache coherency and atomicity, C-the-language isn't the hard part and Java-the-language does very little to help with those parts.
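The alternative the comment describes, several condition variables sharing a single mutex with targeted notify() calls, can be sketched in Python, whose threading.Condition accepts a shared lock (Java's ReentrantLock.newCondition() gives the same shape; this is an illustration, not C# code):

```python
import threading
from collections import deque

class BoundedQueue:
    def __init__(self, capacity):
        self._items = deque()
        self._capacity = capacity
        self._lock = threading.Lock()
        # Two conditions on ONE lock: each waiter sleeps on the
        # predicate it actually cares about, so we can notify()
        # exactly one relevant thread instead of notifyAll()-ing
        # everyone and eating spurious wakeups.
        self._not_empty = threading.Condition(self._lock)
        self._not_full = threading.Condition(self._lock)

    def put(self, item):
        with self._not_full:
            while len(self._items) >= self._capacity:
                self._not_full.wait()
            self._items.append(item)
            self._not_empty.notify()   # wake one consumer only

    def get(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            item = self._items.popleft()
            self._not_full.notify()    # wake one producer only
            return item

q = BoundedQueue(capacity=2)

def consumer():
    for _ in range(5):
        print(q.get())

t = threading.Thread(target=consumer)
t.start()
for i in range(5):
    q.put(i)   # blocks when the queue is full; woken via not_full
t.join()
```

With the single implicit monitor per object, both producers and consumers would share one wait set, forcing notifyAll() and the wasted wakeups the comment complains about.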

2
ibgib 3 hours ago 1 reply      
I enjoyed this article. I started concurrency with Delphi 5 using critical sections, mutexes, semaphores, etc. Then I started using C#, which was a major upgrade, and I loved having the simplicity of things like the `BackgroundWorker` class and the `lock` statement, which certainly beat the need to write my own multi-threaded class for every little thing in Delphi. I recall some of the atomicity problems the article mentions, and I would have to juggle when to use Interlocked vs locks, etc. For example, it mentions floats and hardware-specific issues with 64-bit ints, but from what I recall I treated booleans as atomic and not needing Interlocked or locks, whereas floats I did. That CPU caching detail level, though, sounds mighty insidious. I also forged ahead and used TPL, async/await, Rx (Observables are pretty awesome)...

But now I live on Erlang's BEAM (via Elixir) and I freaking love it. The real gain for me is that I found I didn't need mutexes, locks, critical sections, etc., because the super lightweight thread-like Erlang processes (not OS processes) themselves run in parallel but each one runs in a single-threaded manner. This effectively turns each process itself into its own critical section, and it's this aspect that I personally have found extremely valuable.

3
sharpercoder 9 hours ago 3 replies      
The more threading code I have seen, the more convinced I become that it should be avoided at all costs. The pitfalls are non-obvious and can potentially cause data corruption. Rather, they should be behind understandable abstractions (like await/async and Task/Task<T>, though those have their own pitfalls) where applicable. Also, I can see value in an "unsafe" block (e.g. a "threading {}" keyword) where the writer of code communicates to the reader: "inspect this with special threading awareness".
4
jaegerpicker 2 hours ago 0 replies      
It's a cool series. Mostly review for me but I certainly passed it to my team to review. Ultimately I say that 99% of the time the correct answer is to not use raw threads and never share memory. There are much better concurrency and parallel patterns out there. Akka(.net in this case), Coroutines and Channels, Erlang Actors, Clojure's core.async, .net's async/await even. Honestly if you are reaching for threads you probably should ask yourself if some other way would be better and safer.
5
achr2 8 hours ago 1 reply      
It's odd to have a whole section on coherency and never mention the built-in `volatile` keyword, whose entire purpose is to indicate that a variable requires fresh reads.
6
mathw 9 hours ago 3 replies      
I'd like some more explanation of why the author thinks it's better to use a lock statement than an Interlocked method most of the time.

This is really just fundamental concurrency stuff though. Something which is sadly in short supply in some people's skill sets, but then I guess not everyone spent four years working on massively multithreaded C++ software early in their career like I did.

I'd really prefer it if C# made you share memory explicitly - default shared memory concurrency is just asking for trouble, in this and many other languages, because you have to do extra to do things right, rather than extra to do things wrong.

7
cm2187 9 hours ago 2 replies      
I learned something about Thread.MemoryBarrier().

I expected the following to be safe:

  Dim V(99) As Integer
  Parallel.For(0, 100, Sub(i)
                           V(i) = i
                       End Sub)
  Dim Result = V.Sum
If I understand correctly it looks like I need to flush the memory before accessing V:

  Dim V(99) As Integer
  Parallel.For(0, 100, Sub(i)
                           V(i) = i
                       End Sub)
  Threading.Thread.MemoryBarrier()
  Dim Result = V.Sum

8
gnur 9 hours ago 1 reply      
I know there is a website that has gamified certain race conditions. There was an interface that allowed you to step through code in 2 "threads", and the goal was to defeat the enemy army through deadlocking their code.

Does anyone know where it is? It was very informative.

9
manigandham 4 hours ago 0 replies      
Always use Interlocked over a lock statement if you're just dealing with incrementing numbers. It's one line, faster, and always safe. The author is mistaken in advocating a lock in these situations.

You only need lock() and the full Monitor class if there's more to be done within the locked statement.
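For comparison, the lock form looks like this in Python, which has no public Interlocked-style atomic integer (an illustrative sketch, not C#; in C# the one-line alternative for a bare increment is Interlocked.Increment(ref value)):

```python
import threading

class Counter:
    """Lock-guarded counter: the moral equivalent of a C# lock
    statement around 'value++'. The whole read-modify-write must
    be one unit, or concurrent increments can be lost."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:       # read, add, write as one critical section
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

counter = Counter()
threads = [
    threading.Thread(
        target=lambda: [counter.increment() for _ in range(10_000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 40000: no increments lost
```

The trade-off the comment describes holds generally: a single atomic primitive when the critical section is one read-modify-write, a lock when there is more to protect.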

10
jsingleton 10 hours ago 0 replies      
Nice post. Simply written and easy to understand.

There are two others:

- http://benbowen.blog/post/cmmics_i/

- http://benbowen.blog/post/cmmics_ii/

It's well worth learning about the Interlocked class if you want to do parallel programming. If you get it wrong then you will see incorrect/corrupt results. It's also worth keeping in mind that for simple tasks it can be quicker to do them single threaded, due to the overheads involved. I demonstrated this in a simple benchmark app [0] that I wrote for a chapter of my book on ASP.NET Core [1].

[0]: https://github.com/PacktPublishing/ASP.NET-Core-1.0-High-Per...

[1]: https://unop.uk/book/

11
sidlls 7 hours ago 1 reply      
These are common mistakes novices to multithreaded programming make, presented and explained well. Very nice.

In my experience another mistake made by novices and journeymen alike (at least occasionally by the latter) is to reach for this tool without careful consideration of whether it's even necessary.

12
Pxtl 5 hours ago 2 replies      
The one that caught me recently was assuming that you could use a Dictionary as a concurrent cache.

You can't. Dictionary can be read concurrently, but once you start writing to it concurrently all bets are off. ConcurrentDictionary implements IDictionary and allows concurrent writes.
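A common way this bites is a hand-rolled get-or-add cache, where the check and the insert race. A lock-guarded sketch in Python (illustrating the pattern only; in C#, ConcurrentDictionary's GetOrAdd plays this role):

```python
import threading

class Cache:
    """A get-or-add cache. The check and the insert must happen
    under one lock, otherwise two threads can both see a miss
    and both run the (possibly expensive) factory."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get_or_add(self, key, factory):
        with self._lock:
            if key not in self._data:
                self._data[key] = factory(key)
            return self._data[key]

cache = Cache()
calls = []

def expensive(key):
    calls.append(key)          # record how many times we computed
    return key.upper()

threads = [
    threading.Thread(target=cache.get_or_add, args=("hn", expensive))
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(calls))  # 1: the factory ran exactly once
```

(One caveat if porting the analogy back to C#: ConcurrentDictionary.GetOrAdd does not guarantee the factory runs only once under contention, whereas this lock-based version does.)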

Also, Entity Framework. We had an icky bug coming from somebody storing the datacontext in a member variable. Don't do that.

13
novaleaf 6 hours ago 2 replies      
The biggest gotcha I've seen with C# multithreading is the usage of the basic .NET collections: Dictionary<K,V> and List<V>.

They will mostly work (like multiple threads reading, or one writing while another retrieves), except when two threads try to write simultaneously, or when one writes while another enumerates.

When that happens, all bets are off, and you will silently get obtuse errors such as null return values or enumerating past the end of the collection, or a corrupt collection that throws exceptions from other (seemingly) random and unrelated areas.

I believe the new concurrent collections take care of this, but it is still so easy for beginners to shoot themselves with async programming. Very much in stark contrast to how dev-friendly the rest of the CLR is.

14
Pigo 9 hours ago 2 replies      
I've often wondered what the real gains are for using async/await methods inside a web API call, per se, especially when it's a single task being performed, like a query. I've tried to set up concurrent queries and do WhenAll or Parallel.ForEach, but I think I always hit a roadblock because each query has to be from a separate context for that to work.
15
samfisher83 5 hours ago 0 replies      
I think I learned this while learning Peterson's algorithm. The professor emphasized that the flag had to be a single-operation variable.
16
jorgeleo 5 hours ago 1 reply      
It is also interesting that these recommendations apply to the use of the async/await pair.
17
jimmaswell 7 hours ago 0 replies      
Not working very well in mobile: https://imgur.com/xPn8YLc

A simple table would have worked fine.

18
Humble Book Bundle: Arduino and Raspberry Pi humblebundle.com
57 points by triecatch  3 hours ago   6 comments top 3
1
EnFinlay 23 minutes ago 0 replies      
I just want a loose leaf printed version of all the books I've bought through Humble Bundle.
2
ptrptr 1 hour ago 1 reply      
IMO the highest tier should include a Raspberry Pi Zero.
3
triecatch 2 hours ago 2 replies      
Is anyone familiar with any of these books? I'm particularly interested in the FPGA book but can't find many reviews of it (most say that it's shallow, but don't go into whether the content that's there is good).
19
ECMAScript 2016+ in Firefox mozilla.org
233 points by robin_reala  13 hours ago   71 comments top 9
1
cdnsteve 10 hours ago 0 replies      
Async Functions. Status: Available from Firefox 52 (now Beta, will ship in March 2017).
2
nailer 12 hours ago 6 replies      
I can see that strict mode inside a function using default parameters should throw according to the spec ( https://tc39.github.io/ecma262/#sec-function-definitions-sta...) but does anyone know why? Is strict mode something we should now be avoiding?
3
M4v3R 12 hours ago 3 replies      
Good to see such quick progress in this area in major browsers. It's worth noting that WebKit also has 100% support for ES 2016+ [0]. So now only Edge is lagging in this regard.

[0] http://kangax.github.io/compat-table/es2016plus/#webkit

4
hatsunearu 6 hours ago 0 replies      
I like how they added (what is essentially) left pad. Hah!
5
rocky1138 8 hours ago 4 replies      
Do any of these changes make the language objectively better? All I see are nice-to-haves that aren't really required.
6
shadowmint 10 hours ago 11 replies      
Realistically, when are we going to see the modules situation resolved?

ES2017?

ES2018?

I know, I know, just use a transpiler and emit a bundle... but really?

It's been a draft since 2015, and no browsers have any support for it yet, despite full support for the rest of the standard?

Are modules really that controversial?

I'm kind of disappointed, honestly, that despite all the progress in the ecosystem, this long standing issue still remains mysteriously unsolved, by anyone.

If you're using a transpiler anyway, who really cares if you have native support for the language features?

7
scanr 5 hours ago 0 replies      
I'm unreasonably happy about Async Iterators being on the roadmap. Makes it much easier to write imperative asynchronous streaming code.
8
SimeVidas 5 hours ago 0 replies      
Async iteration is already at Stage 3? Nice.
9
andrzejsz 9 hours ago 1 reply      
I wonder, does ECMAScript 2017 have the optional name ES8?
20
The Impact Github is Having on Your Software Career medium.com
208 points by kumaranvpl  10 hours ago   164 comments top 46
1
KiwiCoder 7 hours ago 13 replies      
I do a lot of recruiting, for both paid and volunteer coding roles. I've been hiring for about 13 years, and I've been coding professionally for about 25 years. Before that I coded as a hobby, from about age 11.

Speaking from this experience, and as someone who reviews on average 20-50 coder profiles a week, the public commit history of a coder is almost never a significant factor. I don't see any trends that indicate this is changing, either.

The vast majority just don't have much to show, having spent their years working behind walls on closed software.

Instead of relying on a public portfolio that in most cases won't exist, I rely on talking to these people directly, programmer to programmer. If we can code together, on the actual code they would be working on, that's about as good as it gets.

In other words, I rely on my experience as a coder to help make what are, ultimately, subjective judgement calls.

2
cableshaft 8 hours ago 5 replies      
I think this guy feels this will be true because that's the bubble he lives in (an open source bubble). His bubble is not reflective of the entire software development community, however.

Sure, it'd be nice if everything I worked on was available open source, but let's just look at one example where that's not a good idea: video games.

Good luck getting the game industry to make their AAA games open source, especially before release, when hundreds of people are chomping at the bit to have a dirt easy time of cloning whatever game out there and tossing it on the app stores to make a few bucks off their broken, half-stolen mess of a game. It's a rampant problem right now (for example, 2048 and Flappy Bird both had open source versions, possibly unofficial... look how many clones of those games hit the app stores).

Not to mention, game development often is a creative one where features are tried and then have to be cut for time, while audiences assume that anything they see will be in the final game, and if not they were maliciously lied to. This leads to any information about the game being closely guarded except for planned waaay in advance marketing campaigns, for most games (indies is a little different, indies need whatever exposure they can get usually).

So no, not everyone is going to be working on open source software, and that's not going to change in the 2 year 'Mark my words!' timeline this guy has provided.

3
sqeaky 3 minutes ago 0 replies      
I agree that in principle a strong portfolio is a great way to communicate software development skill. What I don't understand is why he feels "now" is different.

Github isn't exactly new, and there has been plenty of time for others to innovate in this space. Why haven't we seen this effect already? If it could happen fast, it would already have happened. Why won't it be slow, taking a decade or more, like the proliferation of social networks?

4
projektir 8 hours ago 2 replies      
This way of looking at things is so corruptive. Does everything we do have to be about reputation? About our CV and how we're hired? About competition? How many commits does it take to compensate for that state of mind, for what it does to people? Whenever you bring those things, they only leave a scorched plain in their wake and the community is forced to move somewhere else.

> One of the principles of open source is meritocracy: the best idea wins, the most commits wins, the most passing tests wins, the best implementation wins, etc.

Have we learned nothing?

"When a measure becomes a target, it ceases to be a good measure." - Goodhart's law

5
Nekorosu 7 hours ago 0 replies      
The opinion represented in the article is based on top of several false assumptions.

1. GitHub represents the major part of the software development world.

No. It's a tip of an iceberg. I don't think I have to elaborate this one.

2. Open source software development model is an absolute winner.

No. It's just one model of development, which fits some kinds of projects better than others. Companies tend to open-source tools. A product's code is usually closed source. Obviously there are a lot of developers who spend most of their time on closed-source projects. Some of them also have a life.

3. Your open source contributions help recruiters evaluate your performance for any kind of software development project.

Not true. A lot of jobs are either legacy-project work, niche-technology work, or both. Some niche technologies are closed source. Some legacy technologies predate open-source culture. The problems you face doing open source projects are very different from those you face doing B2B or B2C projects.

Having said that, the author's opinion looks invalid to me.

6
ENOTTY 8 hours ago 6 replies      
Disagree totally. There are gazillions of software developers quietly plugging away at gigantic companies with over 50k headcount whose contributions will never make it into open source. Think business intelligence software, accounting software, embedded systems developers, etc.
7
superzadeh 6 hours ago 0 replies      
> About me: I'm a Legendary Recruiter at Just Digital People; a Red Hat alumnus; a CoderDojo mentor; a founder of Magikcraft.io; the producer of The JDP Internship, The World's #1 Software Development Reality Show; the host of The Best Tech Podcast in the World; and a father. All inside a commitment to empowering Queensland's Silicon Economy.

The Software industry needs more humility and less magicians.

I believe that strong engineering skills are worth far more than a Github profile, and good recruiters will know how to spot this.

If however you are looking to be hired as the new "Rockstar developer" of the latest trending startup running buzzword architecture, then a good public brand might be useful indeed.

8
samuli 8 hours ago 0 replies      
At least in Finland (don't know about the rest of Europe) the information must be collected from the applicant and not by doing internet searches [0]. If a company wants to use any other sources, they must ask the applicants permission first.

[0] http://www.finlex.fi/en/laki/kaannokset/2004/20040759

9
peteretep 7 hours ago 0 replies      

  > Over the next 12-24 months, in other words between 2018 and 2019, how software developers are hired is going to change radically.
Just in time for Linux on the desktop

10
jurassic 7 hours ago 3 replies      
I recently read a novel called "A Fine Balance" by Rohinton Mistry. One character, Ashraf Chacha, is a man who works as a tailor. He is extremely grieved by the death of his wife. He talks about how when she was alive it never bothered him to spend all of his time reading or sewing - just knowing she was there in another room was enough. But after she dies, he regrets all the time he foolishly spent away from her.

I don't want to look back on my life with that feeling. As adults, we have very little free time and spending it on software seems like a gross misallocation of resources, even if our careers would benefit. There's more to life than work.

11
donretag 7 hours ago 1 reply      
All the work you do here will be in the open. In the future you won't have a CV; people will just Google you.

Interviewer: I see you have an impressive profile on GitHub, but can you write an algorithm for quicksort on the whiteboard?

12
nappy-doo 7 hours ago 0 replies      
Would you like to be judged by the number of lines of code you submit? It's not much different than being judged by your commit graph on GitHub. If we judge ourselves, and others by this ridiculous metric, we're cheapening our profession.

We are knowledge workers. We work by thinking and acting. A graph captures none of the first. The most valuable code I've ever written for an employer was about 100 lines that took 2 weeks to write. Would the commit graph capture that?

I have gotten into arguments with my boss about hiring based on commit graphs and commit counts. I lost. We hired someone who checks in a lot of stuff but often has to fix it with further changes. His graph and change count look great. It's a nightmare working with him.

In my experience, commit graphs are bogus.

13
doodpants 7 hours ago 0 replies      
15 years ago, you went to Freshmeat or SourceForge to find open source projects. Now it's Github. In 5-10 years, a different open source repository might be more popular.

The author seems unconcerned about a particular company having so much control. Note that he compares not having a Github account to not having email or a cellphone in general, rather than to not having e.g. Gmail or an iPhone in particular, which would be a more apt analogy. It's a disheartening reminder of the trend of the internet moving away from decentralized protocols to centralized services.

14
pyb 7 hours ago 1 reply      
That chestnut is getting a bit old now.

For those of us who do have open source code on GitHub, the consensus is clear: despite the hype, most employers don't really look at GitHub in any depth. They mostly prefer to grill candidates rather than look at what they've actually done. I guess it must be more enjoyable.

15
emodendroket 6 hours ago 2 replies      
> For software developers coming from a primarily closed source background, it's not really clear yet what just happened. To them, open source equals working for free in your spare time.

> For those of us who spent the past decade making a billion-dollar open source software company, however, there is nothing free or spare-time about working in the open.

> [...]

> The way to get a job at Red Hat last decade was obvious. You just started collaborating with Red Hat engineers on some piece of technology that they were working on in the open, then when it was clear that you were making a valuable contribution and were a great person to work with, you would apply for a job. Or they would hit you up.

Actually, this sounds exactly like working for free. You do some spec work and if they like it maybe they'll hire you.

16
joshstrange 7 hours ago 2 replies      
> Previously privileged developers will suddenly find their network disrupted.

Good god... I love GitHub and what it allows for, but let's not paint this as the "commoners" knocking down the walls that protect the "elites" in their ivory castles. Committing to GitHub does not, on its own, make you a better developer or a better person. In the same way, not committing to GitHub does not, on its own, make you a worse developer or a worse person.

I've met people who don't have a single commit to a public GitHub repo who are FAR better developers (and, in some cases, people) than ones I've known who are deeply ingrained in a public project. In some, not all, cases there are different skillsets required for OSS work vs. working at a "closed source" company. And while we're on that subject, I'm quite sick of the attitudes I sometimes see on HN around "closed source", as if it were some evil thing. It's not: people have to eat, pay rent, and support their families, and anyone who thinks you can easily do that without working for a for-profit company is living in a dream world. How many times have we seen posts here about how little money is actually given to OSS? Sure, I'd love to work only on OSS, or for a company that is open, but there are very few success stories in that area. The cognitive dissonance the HN crowd exhibits when it fawns over the SaaS companies and various other SV unicorns (all closed source) and then derides closed source is astounding.

All of this is to say that the message of "Your GitHub profile will mean everything for your career" is simply bullshit. CAN it give you a leg up? Of course, but experience working for a company, in that kind of setting, is much more valuable to the vast majority of companies. Take a look at Linus Torvalds: undoubtedly a genius and a person who has done immense good for the OSS community, yet his temper/attitude are legendary, with HN periodically posting links to his smackdowns on mailing lists and the like. That said, do you think many companies are looking for that kind of employee? I think not (no disrespect meant to him at all; it works for him, and I doubt he's looking for a job at those kinds of places anyway).

Most of the companies I've worked for or interviewed with have either not cared at all about my GitHub profile or, even when they say they care, given it no more than a passing glance. Focus on being good at what you do, and if that happens in public on GitHub/GitLab/etc. then all the better, but don't bend over backwards to make your profile look good at the expense of actually knowing your shit.

17
legostormtroopr 38 minutes ago 0 replies      
This is such a double-edged sword: GitHub as the central node in the network of trust means GitHub has unprecedented control over the developers in that network.

If someone doesn't like you or your repo and can convince GitHub to evict you, you're off the network, and all your contributions are gone.

18
kabdib 37 minutes ago 0 replies      
I read a lot of resumes, and interview a lot of developers.

The times I've looked at Github it was either "meh, this doesn't tell me much" or "Holy shit, run away". And about 5X in favor of running away.

So it's been useful, in a sense, but not as a positive signal.

19
prh8 8 hours ago 0 replies      
This almost seems more like a hype article for GitHub than reality. It's certainly an extreme version of some truths (good and bad), but this is not how things are going to be for developers, certainly not as soon as 2018-2019.
20
rampage101 1 hour ago 0 replies      
The vast majority of programmers are not competing for magic internet points. A few years ago there was a massive craze to create good-looking Stack Overflow and GitHub profiles. Most people are not giving away their time for free; if they contribute to open source, it usually benefits their company or a hobby project somehow.

Another issue is what happens to GitHub if they are not successful as a company. There was a story a few months ago here about how the company itself is losing money. How much will your profile be worth if GitHub goes out of business?

21
d--b 6 hours ago 1 reply      
Here is some career advice: do not describe yourself as a "JavaScript Magician, Story-teller, Code DJ". That really doesn't play well with a recruiter who is looking to put together a team of people who like to work together.
22
bjornlouser 6 hours ago 0 replies      
"It's not your code on GitHub that counts, it's what other people say on GitHub about your code that counts."

In the near future all of our problems will be solved by minor celebrities of fashionable Github repos.

23
isometric8 8 hours ago 3 replies      
I saw the reality of this in a job posting the other day: the recruiter wrote that applicants with GitHub work would be given preference. This is pretty stupid, really; a lot of really good developers are working on projects that cannot be shared, discussed, or open sourced. Just because you haven't had the time between 1am and 3am to contribute to an open source project doesn't mean you suck as a developer.
24
dajohnson89 8 hours ago 0 replies      
Towards this end, is there a place that matches developers with small projects needing just a small amount of help? Say, a few small bug fixes or enhancements? I'd be happy to contribute but I don't want to commit too much of my time nor do I want to navigate the maze of a larger project.
25
nmeofthestate 4 hours ago 1 reply      
If a recruiter checks my github they'll find a project to internet-enable some bathroom scales.

Hire or No Hire?

26
neoeldex 8 hours ago 1 reply      
A self proclaimed "Legendary recruiter" Kssshh
27
jasonpeacock 6 hours ago 0 replies      
He writes about your network of connections (1st, 2nd, 3rd degree and so on) being lost when you change companies, but that is incorrect.

Enough people I know and respect have moved on to other companies that my own network has actually grown. If I chose to apply elsewhere, the first thing I'd do is reach out to my own network, not spam recruiters with my GitHub account.

Your Github reputation can be used in addition to your network, or as a substitute if you have no network, but it will not replace the impact of having others already in the industry/company personally vouch for you.

"Who you know" is still the best way to get your foot in the door, because we are all still social animals.

28
ivan_gammel 3 hours ago 0 replies      
The main logical failure of this article is the wrong assumption that code published some time ago always reflects current skills or is relevant to current needs. What you wrote even a couple of years ago is quite often no longer important: progress is fast; new design patterns, new libraries, and new, more advanced coding approaches become popular; you learn, you continuously improve your skills, only to be judged by what?

Sometimes, in some cases, it makes sense to filter candidates by looking at their GitHub. But I would not bet on it: the better filter is the code being written today, which is unlikely to be available for most developers.

29
thehardsphere 8 hours ago 0 replies      
> Eventually a vast majority will be working in the open, and it will again be a level playing field differentiated on other factors.

Haven't people been saying this for like, 10 years now?

30
austincheney 5 hours ago 0 replies      
The article is both very true and not yet completely true. Some developers still believe having a large community or online social media presence is important. It might very well be, but it has no bearing on code.

Code is objective. It solves problems, passes tests, does something new, and performs better... or it doesn't. Social media presence has no bearing on this at all. Yet each imparts a type of online trust: one is capabilities (competence) trust, and the other is marketing or branding trust.

For people who don't know the difference or don't understand the value in the code, marketing/branding trust is the only thing that exists. For everybody else, trust in branding loses value quickly and must struggle to compete with the other, more objective trust factor. It should be noted that this "everybody else" category is the minority, but it is more influential on what gets built or prioritized.

31
KirinDave 7 hours ago 0 replies      
I think that looking at a GitHub profile is tricky. Very few people get to work on open source projects. And quite frankly, Linux's model of "trying to get noticed by the admins" doesn't scale well at all. There is a lot more to it than he's suggesting.

If you want to interview people effectively try this crazy formula:

0. Ask for a remote tech screen: a simple piece of work that you can evaluate and that resembles real work. Make sure it can be built and tested empirically (the best way is automatically). This should be simple, not 8 hours of work. Don't ask people to code on a whiteboard or do their best algorithmic design in a high-stress interview session; you won't get it.

1. Be prepared. Know what you're interviewing for and list out the skills. Read the candidate's resume and review the prior work they offer. Read their code first.

2. Code review their tech screen.

3. Make sure to ask questions about their approach to work, leadership and projects. Encourage them to ask you similar questions.

4. Have a broad cross-section of your company interview. Mix up who does this; diversity is a plus here. Also make sure designers and HR folks get a bit of time, not just engineers.

5. If the position is senior tech, whiteboarding is acceptable for architecture, planning, or diagrams.

32
steven777400 7 hours ago 0 replies      
A decade ago when I was teaching computer programming full time at the community college, I told my students that their online portfolio would be the most important thing to differentiate them from their classmates.

I'm actually slightly less sure of this today... It's probably true for startups/"tech" companies, but many, many programmers work on LOB software in non-tech companies, and I haven't heard or seen much to suggest the hiring process there has changed. If anything, we've come closer to a consensus that a small "work sample" is the best possible kind of interview: show up, here's a laptop and a small CR or project, implement. This has the advantage of being doable remotely too, of course.

Pull requests back to languages and frameworks work if you have the skill level and motivation to work at the framework level (e.g. Google, Facebook, Microsoft: the companies making the mainstream frameworks). But it's a disservice to ignore the many, many programmers working successfully at a lower level. It seems the portfolio-as-key to being hired is still not very true for that category.

33
amirmansour 3 hours ago 0 replies      
I don't think most people doubt the utility of Github, but this writing seems to be from a person living in a bubble.

Most people don't get the time and resources to work on open source. Sometimes you can't even open source something, despite wanting to. For example, when I was in academia I wanted to open source some worthwhile projects I had completed, but was never allowed to by superiors, because they considered them competitive-advantage IP.

34
whack 7 hours ago 0 replies      
I have to say that this post was inspiring to read. It's everything that software developers had dreamed of and idealized. Owning your own career. Building your own brand. Doing your work in the open. Your career progress being decentralized and determined by the trust you've built in your network of colleagues, and not by some pointy haired business guy.

Unfortunately, I don't think any of the above is realistically going to happen at any point in the next 10 years. The majority of paid-for-code is proprietary, meaning that your 9-5 work-product can never be "googled" by outsiders. As a consequence of this, recruiters and hiring managers will continue to treat open-source work as second-class-indicators, behind your resume, CV and references. It's going to take a major paradigm shift in engineering/business culture, before any of this changes.

That said, the author might be wrong about the timeframe, but he does paint a noble vision for the future. One that I'm sure future generations will be working in, and one that I hope I can experience one day.

35
Humdeee 6 hours ago 0 replies      
When asked for a GitHub profile or evidence of contributions, I immediately apologize in advance on the application (I actually use Bitbucket and have no public repos). I explain that every side project I have ever done outside of school is aimed at helping the public but also serves as a means of financial side income. Therefore, it is confidential, professional code that I will not share.

This creates 2 things as a byproduct:

- It keeps my standards high as it's a paid for product

- It forces me to actively maintain the projects and even provide a tiny bit of customer service

I really don't care if people have a problem with this. It's their problem at that point, not mine. I have no issue demonstrating anything or talking about it, but we're not going over lines of code in my stuff. I choose not to treat my code hosting platform as a social network.

36
5ilv3r 5 hours ago 0 replies      
One thing not mentioned is that the reputation of the programmer is tied to contribution quality. MOST programmers make low quality contributions because they are hired en masse to develop a project at a price point. They code for a buck. These are the outsourced projects, the rushed projects, and the tools made by people who obviously have no clue what they are doing.

Github will allow the cream to rise to the top, sure, but those "Just a job" programmer roles are going to exist for a long time. This is because many companies simply want contract fulfilling crap, not quality code.

37
lithos 6 hours ago 0 replies      
If this becomes more true, I'm glad that I chose electrical engineering over software whatever-you're-called-now.

I can't imagine spending three years of weekends developing some company's software for a chance at an interview.

38
pselbert 7 hours ago 2 replies      
I find it difficult to imagine doing extensive web work (Rails, Django, PHP, JS) without ever touching GitHub.

Surely, in the course of your daily work using giant open source projects, you'll file a bug, comment on an issue, or even submit a pull request. Most large applications are built on dozens, or hundreds, of open source libraries, all of which have bugs at various times.

I don't expect all web developers to have side projects or libraries, but I would expect they'll interact with open source projects in some way. This way of thinking clearly doesn't apply in more closed source worlds like games, finance, or giant enterprise, but it certainly holds for startup/web work in my experience.

39
kenoyer130 5 hours ago 0 replies      
I do a lot of interviews, and I tell interviewees that a GitHub project with a clean commit history is worth the same as or more than a college degree. I've been burned many times by people who can do tricky CS stuff but don't actually get anything done on the job. It doesn't have to be a large project, just enough to show me you know the basics of a source control workflow and how to communicate.
40
mdekkers 7 hours ago 0 replies      
> For software developers coming from a primarily closed source background, it's not really clear yet what just happened. To them, open source equals working for free in your spare time.

I'm guessing that would be > 80% of all software developed globally?

41
tomc1985 5 hours ago 0 replies      
What is it with posts on Medium and people making these broad proclamations? Calm down folks, have some humility...
42
jstewartmobile 7 hours ago 0 replies      
This article is a kind of half-truth. Sure, Red Hat may reach out to you if you're consistently providing value, but that's probably the exception.

Look at how much OpenBSD code gets used in highly profitable commercial products, then compare it to the level of donations they get from the same companies...

43
gravyboat 4 hours ago 0 replies      
Yawn, another one of these? Sure some companies look at your GitHub profile, but this post is just conjecture based on one person's very small bubble.
44
Asooka 8 hours ago 1 reply      
> Smart people will take advantage of this. They'll contribute patches, issues, and comments upstream to the languages and frameworks that they use daily in their job: TypeScript, .NET, Redux.

Yeah, except for the tiny niggle that an overwhelming majority of contracts stipulate that you can't actually contribute to OSS either on or off the job, due to the fact that every single thing you think of while employed (and sometimes for a period after your employment ends) belongs fully to your employer.

45
awinter-py 5 hours ago 0 replies      
OK, but the actual impact on my career is that GitHub distributes for free something that I sell.
46
dudul 4 hours ago 0 replies      
Am I the only one who thinks that writing code is only one aspect of a software developer's job? Sure you can write good code and submit cool patches. How about talking to product owners? How about learning the ins and outs of an industry? How about identifying how to evolve a product in the real world? How about mentoring other developers?

OK, a nice GitHub profile is a plus, but as a hiring manager it is never a make-it-or-break-it kind of thing.

21
Long-winded speech could be early sign of Alzheimer's disease, says study theguardian.com
64 points by Hooke  8 hours ago   37 comments top 9
1
Declanomous 7 hours ago 3 replies      
> Worsening mental imprecision was the key, rather than people simply being verbose, however. "Many individuals may be long-winded, that's not a concern," said Sherman.

For a minute I was worried I might be losing my mind. Thankfully I've always been garrulous, though I'm sure other people don't see that as a blessing.

2
hoprocker 1 hour ago 0 replies      
"Ronald Reagan started to have a decline in the number of unique words, with repetitions of statements over time," said Sherman. "[He] started using more fillers, more empty phrases, like 'thing' or 'something', or things like 'basically' or 'actually' or 'well'."

Seems like history might be coming around again on this one.

3
rconti 5 hours ago 0 replies      
"... I was wearing an onion on my belt, which was the style at the time..."
4
pklausler 4 hours ago 1 reply      
Well, that's a relief. I'm getting older, but I've actually been speaking less, mostly because it's a waste of time to speak when nobody is actually paying attention.
5
sofaofthedamned 2 hours ago 0 replies      
This is basically every academic I know, including the missus. I daren't show her this...
6
paulpauper 6 hours ago 2 replies      
Fred visited Bob after his graduation

Such a sentence could be ambiguous because it's not obvious who graduated. Some may argue it's Bob.

7
torrance 2 hours ago 0 replies      
When I read this, I assumed it was an indirect jab at Trump.
9
w00tw00tw00t 7 hours ago 0 replies      
Long winded title could be a sign of long winded article
22
Lets Encrypt, OAuth 2, and Kubernetes Ingress fromatob.com
147 points by fortytw2  9 hours ago   26 comments top 8
1
andrewstuart2 7 hours ago 2 replies      
Suggestion to anybody reading this: don't use a DaemonSet for this. This really ought to be a Deployment of nginx-ingress resources behind a service exposed as `type: LoadBalancer` (if you're in a cloud-provider that supports LoadBalancer services). Then just create DNS aliases and configure nginx to do session affinity if needed, etc. Not only will it be able to scale with your load instead of cluster size, but you can actually update it in a rolling update already; DaemonSets cannot yet do that.

Really the most important part, though, is that DaemonSets are for services that need to run on each host. Like a log collection service [1] or prometheus node exporter [2].

[1] https://github.com/kubernetes/kubernetes/tree/master/cluster...

[2] https://github.com/prometheus/node_exporter
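A minimal sketch of the Deployment-plus-LoadBalancer setup described above (resource names and the image tag are illustrative, and the API versions shown are modern ones rather than those available when this thread was written):

```yaml
# Run the ingress controller as a Deployment so replica count tracks
# load, not cluster size, and rolling updates work out of the box.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller    # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4   # illustrative tag
        ports:
        - containerPort: 80
        - containerPort: 443
---
# Expose the controller through the cloud provider's load balancer;
# DNS aliases for individual apps then all point at this one address.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
```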

2
rusht 6 hours ago 0 replies      
It's worth noting that there is a discussion on GitHub [0] about building Let's Encrypt automatic cert creation directly into the ingress controller.

[0] https://github.com/kubernetes/kubernetes/issues/19899

3
captn3m0 2 hours ago 0 replies      
> On GCP, the HTTP load balancers do not support TLS-SNI, which means you need a new frontend IP address per SSL certificate you have. For internal services, this is a pain, as you cannot point a wildcard DNS entry, like *.fromatob.com, to a single IP and then have everything just work.

Wouldn't a wildcard SSL cert + wildcard DNS entry work even without SNI support here? I haven't used the GCP load balancer, but as long as you are serving a single certificate (*.fromatob.com), the client/server don't have to rely on SNI at all.

4
theptip 4 hours ago 3 replies      
Note one significant gotcha with this approach: the Ingress does TLS termination, so the hop from the Ingress to your pod is unencrypted.

That might be OK if 1) your data isn't sensitive or 2) you're running on your own metal (and so you control the network), but on GKE your nodes are on Google's SDN, and so you're sending your traffic across their DCs in the clear.

There are a couple of pieces of hard-to-find config required to achieve TLS-to-the-pod with Ingress:

1) You need to enable ssl-passthrough on your nginx ingress; this is a simple annotation: https://github.com/kubernetes/contrib/issues/1854. This will use nginx's streaming mode to route requests with SNI without terminating the TLS connection.

2) Now you'll need a way of getting your certs into the pod; kube-lego attaches the certs to the Ingress pod, which is not what you want for TLS-to-the-pod. https://github.com/PalmStoneGames/kube-cert-manager/ lets you do this in an automated way, by creating k8s secrets containing the letsencrypt certs.

3) Your pods will need an SSL proxy to terminate the TLS connection. I use a modified version of https://github.com/GoogleCloudPlatform/nginx-ssl-proxy.

4) You'll want a way to dynamically create DNS entries; Mate is a good approach here. Note that once you enable automatic DNS names for your Services, then it becomes less important to share a single public IP using SNI. You can actually abandon the Ingress, and have Mate set up your generated DNS records to point to the Service's LoadBalancer IP.

(As an aside, if you stick with Nginx Ingress, you can connect it to the outside world using a Kubernetes LoadBalancer service instead of having to use a Terraform LB; the hard-to-find and fairly new config flag for that is `publish-service`: https://github.com/kubernetes/ingress/blob/master/core/pkg/i...)
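A rough sketch of the step-1 passthrough configuration (the hostname and service name are made up; the annotation shown is from recent ingress-nginx versions and has moved between prefixes over time):

```yaml
# TLS is NOT terminated at the ingress: nginx routes the raw TLS
# stream by SNI, and the backend pod presents its own certificate.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp                      # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"   # controller must run with --enable-ssl-passthrough
spec:
  rules:
  - host: myapp.example.com        # routing key: the SNI hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp            # this pod terminates TLS itself (step 3)
            port:
              number: 443
```

Note that with passthrough in effect, path-based rules are effectively ignored; routing happens purely on the SNI hostname.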

5
zalmoxes 9 hours ago 1 reply      
That's cool, I've done pretty much the same thing for our internal services. I noticed you use the github org for oauth2proxy.

In our setup, I wanted to add authentication to a few dozen subdomains but use a single oauth2proxy instance. GitHub OAuth makes this kind of gross: the callback must point to the same subdomain you're trying to authenticate. But it does allow something like /oauth2/callback/route.to.this.instead

In the end, to achieve what I wanted (a single oauth2proxy for multiple internal services) I had to:

- fork oauth2proxy and make a few small changes to the redirect-url implementation

- create a small service which takes oauth.acme.co/oauth2/callback/subdomain.acme.co and redirects to subdomain.acme.co to comply with GitHub's OAuth requirements

- create a small reverse proxy in Go which does something similar to nginx auth_request. I had a few specific reasons to do this (like proxying websockets and supporting JWT directly): https://gist.github.com/groob/ea563ea1f3092449cd75eeb78213cd...

I hope that someone ends up writing a k8s ingress controller specific to this use case.

6
agentgt 8 hours ago 1 reply      
Question for the author: we just migrated some things to GCP as well, but do not use Kubernetes. For managing infrastructure we only use Packer, bash, and Google Cloud deployment YAML files (similar to the Kubernetes manifest).

Why do you still need SaltStack, and how do you find Terraform? Why do you need Terraform (I suppose it is for your non-Kubernetes infrastructure)?

7
linkmotif 8 hours ago 0 replies      
Discovered kube-lego via Google a few weeks ago and I am really excited to try it with my next product. Thanks for this post.
8
blwide 7 hours ago 0 replies      
That's impressive, but also quite some effort. It feels like premature optimization when looking at the (rather low) traffic of fromAtoB. On the other hand, it's always good to have a scalable deployment when dealing with RoR apps.
23
WhatsApp launches Status, an encrypted Snapchat Stories clone techcrunch.com
110 points by mcjiggerlog  10 hours ago   109 comments top 25
1
untog 9 hours ago 5 replies      
This individual feature isn't so bad, but I still dislike the precedent: WhatsApp is an amazingly fast, efficient app right now. It'll be a shame to see it descend into the usual junk as they pack in feature after feature that no one really asked for.

My parents use WhatsApp on generations-old phones - it performs great and it's simple enough to understand. Why can't we keep the more complex stuff over on Snapchat? Or Instagram. Or Facebook Messenger. Or...

2
reitanqild 8 hours ago 2 replies      
Time for the weekly reminder: if you don't have reason to believe that powerful adversaries are after you, then you can use Telegram. If you feel just fine publishing on Twitter or HN under your real name, I guess you qualify.

Bragging about E2E encryption while feeding the Facebook data monster is, IMO, a bit like bragging about how you transport your slaves in armored vehicles:

Yes, they are safer against robberies.

No, [given my current threat model] I'd still prefer driving something less secure that isn't abusing my every action [and every action of everyone I communicate with] for the profit of Facebook.

Edit: clarifications, in square brackets and below

There seems to be no doubt that WhatsApp is safer against a 3rd-party adversary.

My points are only that

- I consider Facebook an adversary at this point,

- I don't believe they bought WhatsApp and removed the fees out of the goodness of Mark Zuckerberg's heart,

- I don't believe they would update their privacy policy if they somehow thought they could get away with what they are planning to do under the old privacy policy.

3
mderazon 6 minutes ago 0 replies      
I await the day I'll be able to send money between friends via WhatsApp. It's so ubiquitous that it could have a really serious impact on the economy by making cash obsolete.
4
pdog 9 hours ago 2 replies      
Hopefully this pressures Snap to offer end-to-end encryption within their app. WhatsApp is end-to-end encrypted by default[1][2][3].

It's embarrassing that a major app like Snapchat, built around ephemerality and privacy and often handling sensitive data, still doesn't have any form of end-to-end encryption.

[1]: https://www.whatsapp.com/faq/en/general/28030015

[2]: https://www.whatsapp.com/security/

[3]: https://www.whatsapp.com/security/WhatsApp-Security-Whitepap...

5
msoad 2 hours ago 1 reply      
I think it's very easy for Facebook to add stories to each of their apps. It's a single code base I believe!

Stories are now in:

 - Instagram
 - Messenger
 - WhatsApp
 - Facebook (soon, I've seen the beta)
They are going after $SNAP in each and every warfront!

6
thanatropism 8 hours ago 0 replies      
I'm kind of hoping this comes to my phone really soon. Three reasons:

* I miss having status lines as a visible message to the world. I know this isn't exactly the same thing and that Whatsapp/Gtalk/etc. have those, but they have been de-emphasized, so saying "man, I'm excited about $this" isn't likely to reach my friends.

* I'm always conflicted between using Facebook for "thoughtful" stuff (that won't embarrass me in six months' time) and just posting random whimsical observations ("I saw a pretty butterfly!") and moods/feelings. My Facebook network is too wide now, too.

* I did install Snapchat to check it out, but it's just not for my demographic. Younger people take to the internet to complement and boost their meatspace life; we 30-somethings gradually drift apart from friends but want to keep some semblance (or even illusion) of a friend base that is alive.

Overall, I've been using Facebook as a degenerated blogging/syndication platform, but miss the social features of a social platform. Hey, when is the update getting to international iPhone App Stores? I want to try it!

7
ClassyJacket 7 hours ago 1 reply      
I dislike the trend of every app having to be Snapchat. Not every app has to be a jack-of-all-trades: even if well coded and running on a powerful phone, these features add bloat to the interface. I've particularly found it frustrating that Facebook Messenger is no longer a clean list of conversations sorted by most recent. Now I have to scroll through games, rooms, their Snapchat clone, online now, suggested friends, etc. just to get back to the friend I talked to yesterday.

On the other hand, Snapchat is an awful piece of software, and some competition to prod them into fixing it would be useful.

8
kirkdouglas 9 hours ago 4 replies      
Time to move to Signal, I guess. It seems like they will turn WhatsApp into a bloated mess like Facebook Messenger currently is.
9
darkknight265 5 hours ago 0 replies      
Snapchat is weak in the very markets in which WhatsApp is strong. Snap's explanation for why is that their bandwidth-heavy product does not do well in developing countries. This is a direct challenge to that reasoning -- is it the infrastructure (phone/bandwidth) that is holding people back, or the lack of a network effect?

As infrastructure improves, Whatsapp is making the bet that users will prefer to use these features on an app that is already their primary social network.

I'm skeptical of the paternalistic arguments on HN that people don't really want these features -- perhaps the reason Whatsapp users don't use Snapchat is that their social graph hasn't moved to it, not that they don't want to share 'stories'.

10
ploggingdev 8 hours ago 2 replies      
I used to wonder how Facebook planned to monetize WhatsApp; I am beginning to find the answer:

> Status could also open up new advertising opportunities for WhatsApp. If it followed Snap and Instagram's lead, it could insert full-screen ads in between friends' Statuses.

I really liked WhatsApp's business model before the acquisition: users pay a small annual fee to use the app. What was cool to see was that the network effects were so strong that people who had never paid for an app or subscription service paid for WhatsApp. If they had kept the service paid I doubt it would have reached the 1 billion user mark so quickly, but just humour me here: with 1 billion users they would have at least 1 billion dollars in ARR. That would have been cool. They could have focused on what they do best: provide a no-BS end-to-end encrypted messenger which respects users' privacy. (Yes, I am aware of Signal and I use it.)

I am curious to see how Facebook balances the need to monetize against the need to maintain WhatsApp's reputation as a service that respects users' privacy.

11
asadlionpk 8 hours ago 0 replies      
Facebook, to me, has reached its peak. It's waiting to be disrupted.
12
qznc 8 hours ago 1 reply      
I would prefer it if they added useful features instead of just addictive ones. For example, polls, like Threema has.

https://threema.ch/en/blog/posts/threema-poll-feature

13
gagabity 9 hours ago 0 replies      
I hope they don't ruin WhatsApp. They have already turned Messenger into a bloated pile of something, and I have noticed my Instagram is a lot slower since the "stories" feature.
14
vuyani 9 hours ago 1 reply      
This is a big F U to Snapchat after they refused Zuckerberg's offer.
15
CodeSheikh 7 hours ago 0 replies      
This is (mighty) Facebook trying to convey a strong point across to Snapchat by throwing knock out punches from all sides (ephemeral stories in Instagram, Facebook and WhatsApp).
16
funkyy 9 hours ago 0 replies      
One day, there will be a tech giant that will let us disable all the new, amazing, revolutionary, bloated features instead of forcing us to use them. One day...
17
cerved 3 hours ago 0 replies      
Stories in Messenger, Instagram and Facebook were clearly not enough.
18
77yy77yy 8 hours ago 2 replies      
One word: Telegram.
19
morcutt 8 hours ago 0 replies      
I wonder how this will play out. When Instagram Stories were introduced my friends mostly migrated to using Instagram Stories and my engagement on Snap went down.
20
t1o5 8 hours ago 0 replies      
Remember eBay's bright-yellow-to-white background transition?

WhatsApp is owned by Facebook, which also owns FB Messenger; it's only a matter of time for a similar transition.

21
smpetrey 9 hours ago 0 replies      
So Facebook is in an all-out war to destroy $SNAP, huh?
22
PleaseHelpMe 8 hours ago 0 replies      
Facebook really wants to kill Snapchat
23
mcjiggerlog 10 hours ago 5 replies      
I've been using WhatsApp since 2010 and this is the first time I've considered dropping it; all I want is an easy-to-use chat client. What the hell were they thinking? I'm no expert, but I would guess that >50% of their user base does not want this feature at all. My grandma uses WhatsApp!

I think this could be the beginning of the end for Whatsapp's ubiquity. It's such a shame as Whatsapp has such insane market penetration here (UK/Spain) that it is going to be a huge mess to try to switch to an alternative. I literally haven't received an SMS from a friend in years.

24
keythrow 4 hours ago 0 replies      
And SNAP's IPO is coming up!
25
jhildings 8 hours ago 0 replies      
Quite a bad name, considering there is already an IM client called Status, for Ethereum: https://status.im/
24
Post-Olympic Abandonment medium.com
178 points by pmcpinto  13 hours ago   143 comments top 19
1
saalweachter 9 hours ago 11 replies      
Would it negatively impact the Olympics if we stopped rotating them at this point? I mean, besides the IOC missing out on the opportunity to compete with FIFA on graft?

Let's just locate the Olympics in a permanent facility -- Greece, if they'll have them, for tradition. Between not rebuilding everything from scratch every four years and the expectation of recurring events, I think the single host country could run the games more efficiently and benefit economically a lot more -- sure, it'd only benefit the one host country, but it would be going from "not benefiting a lot of host countries (who go over budget and never make back the money on tourism)" to "actually benefiting one host country".

Let the IOC auction off the opening show if you want to give one country a chance to show off every four years.

2
sizzzzlerz 8 hours ago 3 replies      
It seems to me that the countries that maintained their Olympic facilities (England, Australia, US, Norway, Germany, China) all tilt to the wealthier end of the spectrum. Conversely, the poorer ones, Brazil and Greece, really couldn't afford hosting the games in the first place, let alone maintaining the venues afterwards. These days, it's the wealthier countries that turn down the offer to host because of the costs, while the poorer ones, often led by corrupt and bankrupt governments, are awarded the "honor", resulting in even deeper poverty for their people and short-term glorification of the government.
3
santialbo 11 hours ago 1 reply      
I like to think that Barcelona is a good example of how Olympic Games should impact a city. Growth was controlled and it allowed for regeneration of some shady parts of the city. They didn't build crazy venues and the ones they did build are still in use today 25 years later.
4
lhopki01 12 hours ago 12 replies      
There are many articles about abandoned Olympic venues, but I've never seen one about successful former Olympic venues. I'd like to see what happened to all the venues in London and Beijing. From what little I know, a lot of the London ones were temporary or have been successfully transitioned.
5
Animats 5 hours ago 0 replies      
There were articles like that before the Rio games.

The Beijing facilities, which were in a city that could use big sports facilities, are now mostly abandoned.[1] The big "birds nest" stadium now has an ice rink in it, but the huge grandstands are unused. London is doing OK with their leftover facilities.[2] The ones that aren't near a large city, such as Sochi, are abandoned.

[1] http://www.reuters.com/news/picture/ghosts-of-olympics-past?...
[2] https://www.theguardian.com/cities/davehillblog/2015/jul/23/...

6
grabcocque 8 hours ago 2 replies      
This is an entirely unfair representation of the current state of Olympic venues in London. The Olympic stadium is used by West Ham Football Club every week for league and championship football. The Olympic village has been converted into state of the art affordable housing set in a gorgeous new green space.

The swimming and diving and velodrome facilities are state of the art, oversubscribed, and at the forefront of creating a new generation of athletes. I myself am having Kayaking lessons at the London White Water centre Olympic course.

Of course other major Olympic venues predated the bid and have continued to be in continuous use. Wimbledon, Wembley, the Excel, the O2, Eton Dorney...

London has not simply "abandoned" anything, and to pretend otherwise is dishonest.

7
sly010 3 hours ago 0 replies      
As the article also mentions, London spent a lot of its Olympic preparations thinking about what would happen to all the buildings and materials after the games. I was in design school at the time and we all submitted proposals for reusable architecture, etc. There was so much idealism; we all assumed things would be turned into sculptures or street furniture and that the Olympic park itself would become an open site where people would hang out, etc. I wasn't in London for the games, but planned to go back to see the park after. When I got there a month after the Olympics, it was all abandoned, closed, fenced around, and all you could see were giant empty parking lots. If they did any recycling, I didn't see any of it.
8
mixmastamyk 4 hours ago 1 reply      
I had hoped for more investment in sports in Brazil as a result of the games, and indeed the transportation improvements in/out of Barra are incredible. But unfortunately, the govt of Rio is now totally broke and on the brink of failure.

To put it into perspective, in much of the country they have stopped paying their police officers a salary. ((boggle)) Not the safest city in the first place, and maintenance was never a strong suit of Brazil. So the stadiums are just a symptom.

Did they move the futebol games to Engenhao or something? Why would Maracana be abandoned? Where are they playing all the games that still need to be played? It would make sense to rotate them to keep the stadiums used at least part of the month.

I'm trying to think of a way to maintain them and provide sports activities for kids. Normally, I dislike the branding of stadiums, but perhaps a corporate sponsorship of each stadium is a solution, while the govt should focus on its citizens.

9
marcosdumay 7 hours ago 1 reply      
Rio is a bad place to look at. The place is broken for the same reason its last 3 governors are in jail (and older ones have been there), not because it hosted the Olympics last year.

I don't think the ROI was positive, but I also don't think that level of abandonment will last.

10
emondi 11 hours ago 2 replies      
Starts with a bad example. The Maracanã is from the 1950 World Cup, and was also used in the last World Cup.
11
a3n 4 hours ago 0 replies      
Any local government or ngo that proposes to its local citizens that it should host an Olympics should immediately be investigated for corruption. (Tongue barely in cheek.)
12
roystonvassey 10 hours ago 1 reply      
'Average overrun for Games since 1960 is 156% in real terms' [1]. On the whole, the cost of hosting the Olympic Games is a net negative for most cities.

I think it isn't as popular as it once was in attracting tourists; soccer world cups are probably better for that. Given that this is the case, they should just fix one host city/country and host the Olympics only there. It will save unnecessary expenses, especially for developing nations such as Brazil.

EDIT: Added source here

[1] https://en.wikipedia.org/wiki/Cost_of_the_Olympic_Games

13
ryderfast 5 hours ago 0 replies      
Why not just have one venue, but have different hosts organise the opening ceremony? Stick one large venue on an island somewhere, and then hand the responsibility of running it to a different nation each time.
14
erelde 12 hours ago 0 replies      
Makes me think of the Romans' tradition of building wooden theatres, but now instead of wood, which clearly says "temporary", it's cement promising "stability and prosperity".
15
praptak 7 hours ago 0 replies      
Big sport events are a huge scam. The corrupt organizations that have the rights keep money from ads, tickets and bribes, sometimes even requiring changes to the local law to protect their revenue. The supposed benefit for the host is that they can keep the infrastructure, paid by their own money. How generous!

Fortunately, public opinion has generally recognized this fact and there are protests against organizers. I, for one, am proud that we told the fuckers to GTFO of Kraków.

16
prodikl 7 hours ago 0 replies      
Living in Seoul, I played hockey at the olympic stadium there. There are regular baseball games and the entire area is very developed.
17
sandworm101 6 hours ago 1 reply      
Glad to see Vancouver doesn't show up in any of these articles. It can be done well enough. Nobody seems to have any great complaints about those games (2010). The facilities are still in regular use ... which one would expect of a winter games in Canada.
18
lawless123 12 hours ago 0 replies      
There is an analogy to be drawn here with winning the lottery. Both look great but lead to ruin if not managed properly.
19
warcode 12 hours ago 2 replies      
There really should be a rule that you have to maintain and operate the Olympic venues as a public service for at least 4 years after their completion if you want to host.
25
Let's Encrypt appears to issue a certificate for a domain that doesn't exist twitter.com
55 points by 542458  1 hour ago   18 comments top 8
1
jaas 5 minutes ago 1 reply      
Head of Let's Encrypt here. Our team is looking into this and so far we don't see any evidence of mis-issuance in our logs. It looks like the domain in question, 'apple-id-2.com', was registered and DNS resolved for it successfully at time of issuance. Here is the valid authorization record including the resolved IP addresses for 'apple-id-2.com':

https://acme-v01.api.letsencrypt.org/acme/authz/uZGv2KXUJ6Hl...

We can't be sure why the reporter was unable to find a WHOIS record, we can only confirm that validation properly succeeded at time of issuance.

2
Keverw 1 hour ago 2 replies      
I wonder if maybe someone had that domain and then deleted it. Dynadot(and probably others) for example lets you delete domains for a partial refund within a certain limited amount of time - which I think is a neat feature.

I know there are historical whois sites, but as far as I know, unless someone in the past checked for the domain with their service, they'd have no record of it otherwise. So maybe that would explain how it has a cert for a domain that currently does not exist and appears never to have been registered.

3
jlgaddis 1 hour ago 2 replies      
Summary:

* Domain apple-id-2.com is not currently registered

* Domain apple-id-2.com has (apparently) never been registered

* LetsEncrypt, on 2017-01-03, issued a valid certificate for apple-id-2.com

Since we can't know how validation was successfully performed, all we can do is speculate. Someone from LetsEncrypt will have to investigate and let us know. Fortunately, they should have very detailed audit logs for exactly this purpose.

4
jwilk 22 minutes ago 0 replies      
A certificate for apple-id-1.com was acquired on the same day:

https://crt.sh/?id=72482400

There's some evidence that apple-id-1.com existed back then:

https://www.thedailywhois.com/2017-01-03/apple-id-1.com

5
movedx 1 hour ago 1 reply      
My guess is they had a localised DNS resolution failure for the domain and somehow hit a web server, or something else, that ticked all the boxes and granted the certificate. A long shot, though.
6
X-Istence 49 minutes ago 0 replies      
Domain tasting? Get a cert for it, then it expires?
7
apetresc 1 hour ago 2 replies      
As someone who doesn't know much about the inner workings of the whole TLS stack, why is this surprising? I know in the past I've self-signed certificates for domains that only existed on my LAN. Why couldn't that be the case here? Why shouldn't a central signing authority issue certificates for people's intranets?
8
jeffcactus 16 minutes ago 1 reply      
Looks like it was registered quite recently.

;; ANSWER SECTION:
apple-id-2.com.    500    IN    A    50.63.202.53

Creation Date: 2017-02-22T21:57:50Z

The cert was issued in January:

parsed.validity.start: 2017-01-03T22:17:00Z

Not totally surprising, as I have never seen Let's Encrypt do anything that could remotely be considered diligence against shady behavior.

26
SpaceX Dragon Rendezvous and Docking Waved Off for Today nasa.gov
90 points by joss82  10 hours ago   29 comments top 4
1
source99 8 hours ago 0 replies      
r/spacex says the issue was that incorrect data was uploaded to Dragon, not a hardware or software issue on Dragon itself.
2
espadrine 9 hours ago 1 reply      
In case SpaceX is reading: I would love to read a postmortem.

After having built Spash [0], I have much more appreciation for the difficulty of correctly computing spatial locations.

[0]: https://espadrine.github.io/spash

3
benmarks 9 hours ago 1 reply      
This is fascinating to someone working in the open source Web application space. Large (but decreasing, thankfully) pockets of our culture still need convincing about test coverage; clients still can't be sold on its value.
4
joss82 10 hours ago 4 replies      
I feel like space is becoming less of a "let's cross our fingers and hope it works on the first try" culture. See space shuttle landing for a typical example.
27
Linux kernel: CVE-2017-6074: DCCP double-free vulnerability (local root) seclists.org
159 points by QUFB  8 hours ago   67 comments top 13
1
jlgaddis 1 hour ago 0 replies      

 $ echo "install dccp /bin/true" >> /etc/modprobe.d/dccp.conf
 $ sudo rmmod dccp  # in case it's already loaded
This is a good idea for any modules you don't expect to ever need. In my case:

 $ cat /etc/modprobe.d/disabled_modules.conf
 install appletalk /bin/true
 install bluetooth /bin/true
 install cramfs /bin/true
 install dccp /bin/true
 install firewire-core /bin/true
 *snipped*
 install tipc /bin/true
 install udf /bin/true
 install usb-storage /bin/true
 install vfat /bin/true
The original -- to me -- recommendations for this were found in some "hardening guide" for RHEL (CIS, NSA, etc.), although I don't remember which.

See also the "modprobe.blacklist=" kernel parameter, which you'll have to use for "modules" that are compiled into the kernel itself (i.e., they are not actually loadable kernel modules).

15 years ago, when building your own kernels was a normal everyday thing, I simply built my kernels with everything compiled in and modules disabled. This (would have) prevented attacks such as kernel-level rootkits.

In addition, "one neat trick" was that you could halt (not poweroff) the machine (!) -- such as in the case of a Linux box simply acting as a router/firewall -- and the kernel would still be running. Good luck compromising that! :-)
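As a minimal sketch of generating such a blacklist file (the module list is illustrative, and the script writes to a temp directory so it can run anywhere; on a real system the target would be /etc/modprobe.d/disabled_modules.conf, written as root):

```shell
# Sketch: generate a "fake install" blacklist like the one above.
# Pointing `install <module>` at /bin/true makes any auto-load attempt
# succeed as a no-op, so the real module is never loaded.
conf="$(mktemp -d)/disabled_modules.conf"
for mod in appletalk bluetooth cramfs dccp tipc udf; do
    printf 'install %s /bin/true\n' "$mod" >> "$conf"
done
cat "$conf"
```

After copying the file into /etc/modprobe.d/, `modprobe dccp` exits successfully without loading anything, which is exactly the behavior the one-liner above relies on.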

2
wronskian 6 hours ago 2 replies      
The diff from the patch:

 - goto discard;
 + consume_skb(skb);
 + return 0;
One of the rare cases in the wild where a goto really was considered harmful! ;-)

3
unmole 5 hours ago 0 replies      
Here's an overview of DCCP I wrote if anyone is interested: https://www.anmolsarma.in/post/dccp/
5
geofft 6 hours ago 7 replies      
This is a good reason for systems running untrusted code to disable module automatic loading. Almost nobody uses DCCP, and as a result, almost nobody looks at the DCCP code, writes bad userspace apps that trigger kernel bugs that get debugged, etc. We rarely see double-frees in the TCP or UDP implementations.

On my Debian kernel, CONFIG_IP_DCCP is set to "m" (in /boot/config-`uname -r`), which means that DCCP support is built as a module. The code isn't loaded until the first program tries to call socket(...IPPROTO_DCCP). At that point, the kernel will look at /proc/sys/kernel/modprobe and run that program, /sbin/modprobe by default, to load dccp.ko.

Automatic module loading is great when e.g. udev runs and detects what hardware you have, but it's probably not something you'd ever need once a system has completed boot. A very simple hardening measure for machines running untrusted unprivileged code is to echo /bin/false > /proc/sys/kernel/modprobe, late in the boot process (e.g., in /etc/rc.local).

The downside is that system administrator won't be able to run tools that require loading modules, of which iptables is probably the most notable one. A better option than /bin/false is a shell script that logs its arguments to syslog, e.g., `logger -p authpriv.info -- "Refused modprobe $*"`. The sysadmin can manually run modprobe on whatever module name got syslogged (or temporarily set /proc/sys/kernel/modprobe back to /sbin/modprobe). And you can alert on that syslog line to see if there's an attack in progress.

(Does anyone know if it's possible to disable module auto-loading for a tree/namespace of processes, e.g. a Docker container, but keep it working for the base system?)
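A hedged sketch of the logging stub described above (file paths are placeholders, and the demo logs to a plain file rather than syslog so it can run unprivileged; actually installing it would mean writing the stub's path into /proc/sys/kernel/modprobe as root):

```shell
# Sketch of a "refuse and log" modprobe replacement, per the comment above.
# A real stub would log via `logger -p authpriv.info` instead of a temp file.
stub="$(mktemp)"
log="$(mktemp)"
cat > "$stub" <<EOF
#!/bin/sh
echo "Refused modprobe \$*" >> "$log"
exit 1
EOF
chmod +x "$stub"
# Simulate the auto-load request the kernel would issue when a process
# calls socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP):
"$stub" dccp || true
cat "$log"
```

An admin watching the log would see the refused module name and could load it manually with the real /sbin/modprobe if it turned out to be legitimate.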

6
Wheaties466 7 hours ago 2 replies      
Let me see if I got this right: this can be used to DoS a system by consuming all free memory?

The CVE almost hints that this is specifically UDP-related.

Am I right in thinking this?

7
DannyBee 2 hours ago 0 replies      
syzkaller strikes again!
8
lossolo 4 hours ago 0 replies      
Already fixed on ubuntu

linux (4.4.0-64.85) xenial; urgency=low

  * CVE-2017-6074 (LP: #1665935)
    - dccp: fix freeing skb too early for IPV6_RECVPKTINFO

 -- Stefan Bader <stefan.bader@canonical.com>  Mon, 20 Feb 2017 10:06:47

9
arca_vorago 5 hours ago 1 reply      
DCCP is something even a basic hardening should already have taken care of... but of course many people don't do those.

Quick solution: "echo "install dccp /bin/true" >> /etc/modprobe.d/modprobe.conf"

10
m00dy 7 hours ago 0 replies      
It looks legit.
11
vasili111 5 hours ago 0 replies      
Is Gentoo hardened vulnerable too?
12
neoeldex 6 hours ago 1 reply      
I can't wait until the Linux kernel is ported to Rust ^^
13
nyiihahah 4 hours ago 1 reply      
Why isn't there any cryptocurrency for fuzzing well-known software?

https://security.stackexchange.com/questions/152036/why-isnt...
29
AMD Ryzen price and release date revealed pcworld.com
232 points by kungfudoi  7 hours ago   105 comments top 14
1
mrb 6 hours ago 2 replies      
First (amateur) third-party benchmarks confirm a ~$350 Ryzen beats $1000 Intel processors in CPUMARK, 3DMark Fire Strike Physics, Cinebench: https://www.chiphell.com/thread-1706915-1-1.html

AMD had said for years their Zen goal was a 40% IPC gain over their previous microarchitecture, but they ended up with a 52% gain: http://www.anandtech.com/show/11143/amd-launch-ryzen-52-more...

Today's launch event by AMD's CEO: https://www.youtube.com/watch?v=1v44wWAOHn8

2
tracker1 5 hours ago 1 reply      
I just have to say, I really hope that this is real. AMD has burned a lot of trust by overstating performance in a few of their prior generations of CPUs. I ran AMD for several years until the Core 2 came out from Intel (I'm currently very happy with an i7-4790K).

If these comparisons are real, my next build may be AMD again.

3
youdontknowtho 6 hours ago 3 replies      
Really looking forward to this, and I'm glad that AMD's management no longer seems to be on the pipe.

Even if the real world perf is close to the $1K Intel chips it will be a win. It's going to force price cuts from Intel and hopefully spark some competition again.

4
nimos 5 hours ago 0 replies      
I think the low TDPs have to be the scariest thing for Intel. Really interested to see what they can do in the 15-25W range for notebooks and their server stuff.
5
adamnemecek 5 hours ago 1 reply      
I can't think of a more spectacular comeback than AMD in the last year or so. They are positioned pretty well for the inevitable convergence of CPU and GPU. That CUDA crosscompiler was such a brilliant move.
6
Asdfbla 27 minutes ago 0 replies      
If AMD makes a comeback, how did they do it? Was there not enough competition, so Intel got complacent? Or did Intel simply reach the limits, making it impossible to keep the lead and prevent AMD from catching up?
7
walrus01 1 hour ago 0 replies      
I was highly skeptical when the first info about Ryzen came out. This is looking really promising versus the $300 to $400 price range Kaby Lake i7 (7700, 7700K) CPUs.

And when comparing the 1700/1700X to the $200-250 price range i5-7600 Kaby Lake.

8
crudbug 1 hour ago 0 replies      
Interesting to see multi-core wars starting here.

Intel did not do anything as the market leader; 8 years back I could already buy a 4-core machine. Waiting to see how AMD does on server parts: 32 cores / 64 cores? Power9 does 24 cores / 96 threads.

9
Symmetry 7 hours ago 2 replies      
I'm optimistically looking forward to the independent benchmarks on March 2nd.
10
toxican 4 hours ago 1 reply      
I'm far too broke to shell out cash for a new Mobo and CPU right now, but I'm excited to see what this does to Intel's prices for older CPUs. I'd love to upgrade my i5-2400 soon-ish.
11
gnipgnip 6 hours ago 2 replies      
I wish people would benchmark GEMM performance for all of us math folk.
12
laura2013 2 hours ago 0 replies      
Every time I look at AMD's stock price from last year to today, I really wish I'd been wiser with my money. They've performed excellently and it's showing; this will likely be huge.
13
mtgx 5 hours ago 2 replies      
Jim Keller deserves a frigging statue in front of the AMD HQ, if he doesn't have one there already. I'm not even kidding. His efforts shouldn't be easily forgotten, and it could serve to inspire new generations of AMD engineers.

Beyond the product quality itself, I think AMD has had a pretty smart launch strategy by releasing the CPU chips first to show that it can beat "Intel's best".

But they really need to start focusing on notebooks ASAP. That's where they can steal most of the market from Intel, especially now that Intel is showing signs of (slowly) abandoning the notebook market by prioritizing Xeons over notebook chips for its new node generations.

AMD should prioritize notebook chips either next year or the one after that, at the latest. They should be making the notebook chips first, before the PC ones. They need that market share and awareness in the consumer market.

In terms of how they should compete against Intel in the notebook market, I would do it like this (at the very least - AMD could do it even better, if it can):

vs Celeron: 2 Ryzen cores with SMT disabled

vs Pentium: 2 Ryzen cores with SMT enabled

vs Core i3: 4 Ryzen cores with SMT disabled. Or keep SMT and lower clock speeds, as Intel did it. This may help further push consumers as well as developers towards using "more cores/threads".

vs Core i5 (dual core): 4 cores with SMT enabled

vs Core i5 (quad core/no HT): 4 cores with SMT enabled + higher clocks and better pricing. Maybe even 6-core with HT, if AMD goes the 6-core route. I honestly don't even know why Intel decided to make "Core i5" a quad-core chip as well, and its Core i7 a dual-core chip as well. It's so damn confusing - but maybe that was the goal. For differentiation's sake, it may be better for AMD to have a 6-core at this level or maybe even an 8 core with SMT disabled - so same thing as Intel, but with twice the physical cores. I don't know why but for some reason 6-cores don't attract me much. It feels like an "incomplete" chip.

vs Core i7 (quad core/HT): 8 cores with SMT enabled

The guiding principle for this strategy should be "twice the cores or threads, with competitive/better single-thread performance and competitive/better pricing".

In a way it would be the inverse of the PC strategy where they maintain the number of cores but cut the price in half. This would mainly focus on doubling the number of cores (because notebooks come with so few in the first place), while maintaining similar or better pricing.

The only ones that don't really fit well in this strategy are the Celeron and Pentium competitors and that's because a dual-core Ryzen, even at low clock speeds should destroy Intel's Atom-based Celeron and Pentium. We could be looking at a least +50% performance difference, and that's what AMD should strive for there as well. AMD should show Intel what a mistake it made when it tried to sell overpriced smartphone chips for laptop chip prices.

14
arca_vorago 5 hours ago 0 replies      
This is really exciting, as I have been waiting to build a new system with the new AMD gear. Lots of people also don't hear about it, but I am super excited to see the new server-class CPUs. I built a quad Opteron 6380 system and have been in love with them ever since, but they weren't perfect and had some issues I hope are fixed with this new line.
30
Acute exercise increases expression of telomere protective genes in heart tissue nih.gov
193 points by mhb  9 hours ago   74 comments top 9
1
carbocation 8 hours ago 4 replies      
It is hard for me to understand why a mouse study in a low-impact journal is at the top of HN.

The article does not appropriately adjust for multiple testing, and therefore none of its claims are well supported except the JNK2 decrease in post-exercise mice.

Full article is available at http://onlinelibrary.wiley.com/store/10.1113/EP086189/asset/...

2
CodeCube 9 hours ago 3 replies      
The pithy response is of course, "Exercise improves health. In other news, water is wet." But of course that's too simplistic ... this kind of research is awesome. Anything that helps us understand these mechanisms is one step closer to a literal fountain of youth.
3
giardini 5 hours ago 0 replies      
For those who cannot get beyond "mouse", here's an article and references about a human study:

"Lifestyle Changes Lengthen Telomeres"http://www.drmirkin.com/public/ezine092913.html

The study, by Dr. Dean Ornish, was published in "The Lancet Oncology" 17 September 2013 issue.

4
serg_chernata 8 hours ago 3 replies      
For anyone interested, a friendly plug for Dr Rhonda Patrick[1], who podcasts and speaks on nutrition, exercise, aging and telomeres in particular.

1. https://www.foundmyfitness.com/

5
Nomentatus 4 hours ago 0 replies      
I've assumed for a very long time that the purpose of telomeres was to prevent cancer mutations from killing a complex organism, since the resulting growth would be limited to X generations where X is the number of telomeres on the mutated strand of DNA, and then die without those cells being able to divide again.

So while I believe the study will likely hold up, I do wonder why exercise adds telomeres. One answer is that exercise reduces cancer risk (it will get you to bed on time, mostly), so the body optimistically adds telomeres. Alternatively, and perhaps more likely, exercise may trigger more cell division (for purposes of repair; all exercise causes some damage, to collagen if nothing else), so the extra telomeres are added as compensation in order to return to the status quo cap on allowed cell divisions; maintaining the preventative but not actually extending it.

6
wslh 9 hours ago 3 replies      
How much exercise would this be in humans?
7
nottorp 8 hours ago 4 replies      
Out of pure curiosity, what does this mean in English?
8
StClaire 8 hours ago 2 replies      
How would we check these results in humans? Can we safely biopsy someone's heart?
9
DaveSchmindel 6 hours ago 0 replies      
In other news, smoking is bad for your lungs!
       cached 22 February 2017 23:02:01 GMT