hacker news with inline top comments    19 Jan 2017
1
Google Has Started Penalizing Mobile Websites with Intrusive Pop-Up Ads scribblrs.com
125 points by sply  1 hour ago   56 comments top 14
1
Animats 7 minutes ago 1 reply      
What's really stupid are sites from which you can buy things, but then pop up an ad for something else. Fandango, which sells movie tickets, does this. As you're trying to get to the "buy ticket" page, they shove movie trailers for other movies in your face.

I mentioned a site earlier today which sold plumbing supplies.[1] They pop up a "gimme your email" box which 1) cannot be dismissed, and 2) isn't even theirs, it's from "justuno.com", a spamming service.

These outfits have lost sight of what their web site is for. They're putting obstacles in front of a customer who's about to give them money. This is usually considered a big mistake in retail.

[1] https://www.tushy.me/

2
netinstructions 48 minutes ago 1 reply      
Funny, because Google AdSense offers "Page-level vignette ads", which are full-page interstitial ads shown on mobile devices.

The penalty must not apply because:

> They're displayed when the user leaves a page, rather than when they arrive on one, so the user doesn't have to wait for them to load

https://support.google.com/adsense/answer/6245304?hl=en

3
FreakyT 53 minutes ago 0 replies      
Good. Those have become increasingly prevalent, to the mobile web's detriment.

I don't mind a few ads, but many of these interstitials are downright maliciously designed, making the entire page load contingent on hitting a tiny "x" target, presumably with the intention of facilitating accidental clicks on the ad.

4
alphonsegaston 41 minutes ago 1 reply      
I'd really like to see them work on improving relevancy instead of swinging their corporate weight around at whatever "benevolent" end they decide is important this week. Considering how much time I have to spend nowadays tweaking queries and futzing with the search tool options to get relevant results, I'm starting to look at all of these moves much more cynically. Taking on anti-patterns is great, but not when your search experience is rapidly becoming one.
5
aresant 49 minutes ago 4 replies      
You mean intrusive like the AMP header on every !@$! mobile page now, which is not only annoying but also breaks the standard UX?
6
quadrangle 11 minutes ago 0 replies      
"Google Has Started Penalizing Mobile Websites with Intrusive Pop-Up Ads"

I totally read this as "Google Has Started Penalizing Mobile Websites [by using the penalty of imposed] Intrusive Pop-Up Ads" instead of "penalizing those websites that use Intrusive Pop-Up Ads"

7
jrochkind1 39 minutes ago 2 replies      
Why only "mobile websites"? I hate em just as much when I'm viewing on the desktop.
8
evolve2k 20 minutes ago 5 replies      
A client has just asked me to add a pop-up ("watch this vid, join our newsletter") that appears when the user scrolls about halfway down the homepage of their SaaS startup. Further, the pop-up is not to reappear for 90 days.

They got the approach from attending an online marketing workshop that suggested this grows their mailing list.

Felt like a bit of an anti-pattern to me.

Anyone have advice as to whether this is effective, or whether it will be affected by today's announcement?

9
quadrangle 10 minutes ago 0 replies      
Use uBlock Origin, people! In Firefox on Android, I don't see any pop-up ads! I can't believe how much pain people subject themselves to needlessly!
10
chmars 49 minutes ago 2 replies      
What about intrusive cookie warnings?

(They are apparently mandatory in the European Union, and Google made them part of the AdWords rules some time ago.)

11
StuieK 32 minutes ago 0 replies      
If users hate these, shouldn't Google's ability to rank the best pages already take care of this problem without special-casing it?
12
bradlys 51 minutes ago 0 replies      
This title confused me. I thought Google was penalizing mobile websites by injecting intrusive pop-up ads.
13
eumenides1 53 minutes ago 3 replies      
I wish Google would penalize websites with pay walls
14
serge2k 43 minutes ago 1 reply      
Oh good, Google abusing their power again.

I guess as long as it's for "good" reasons.

edit: would any downvoters care to explain how google being able to arbitrarily dictate web content is a power they should have?

2
NHTSA's full investigation into Tesla's Autopilot shows 40% crash rate reduction techcrunch.com
508 points by fmihaila  4 hours ago   201 comments top 16
1
Animats 1 hour ago 1 reply      
It's interesting how vague this is. There's an NTSB investigation still pending into a specific Tesla crash.[1] The goals are different. NHTSA asks "do we need to do a recall?" NTSB asks "exactly what, in detail, happened here?" NTSB mostly does air crashes, but occasionally they do an auto crash with unusual properties. Here's the NTSB report for the Midland, TX crash between a train and a parade float.[2] That has detailed measurements of everything. They even brought in a train and a truck to reconstruct the accident positions.

It took a combination of problems to cause that crash. The police lieutenant who had informed the railroad of the parade in previous years had retired, and his replacement didn't do it. The police marshalling the parade let it go through red lights. They were unaware that the traffic light near the railroad crossing was tied in to the crossing gates and signals. That's done to clear traffic from the tracks when a train is approaching before the gates go down. So ignoring the traffic signal took away 10 seconds of warning time. The driver thought the police had taken care of safety issues and was looking backwards at the trailer he was pulling, not sideways along the track. People at the parade were using air horns which sounded like a train horn, so the driver didn't notice the real train horn. That's what an NTSB investigation digs up. Those are worth reading to see how to analyze a failure.

[1] https://www.ntsb.gov/investigations/AccidentReports/Pages/HW...

[2] https://www.ntsb.gov/investigations/AccidentReports/Pages/HA...

2
snewman 4 hours ago 7 replies      
Tesla comes off extremely well in this report. For one thing, the 40% statistic cited in the headline appears to be well supported by the NHTSA report (section 5.4) and actually manages to frame the incident in a very positive light:

ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to and after Autopilot installation. Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles' crash rate dropped by almost 40 percent after Autosteer installation.

I had hoped to see more information about this specific incident. For instance, any data on whether the driver had his hands on the wheel, what steps the car had taken to prompt his attention, etc. But that doesn't seem to be included.

3
xenadu02 4 hours ago 1 reply      
For those who don't want to sign up to Scribd just to download a publicly available PDF: https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF
4
stale2002 4 hours ago 8 replies      
Oh, hey, will you look at that.

The imperfect, incomplete, beta, level 2 self driving cars that were supposed to be the "dangerous" area of self driving are ALREADY better than human drivers.

Can we stop the politics and deploy all the real self driving cars to the road immediately, since the government has proven that even the shitty variety is safer than humans?

5
sxp 4 hours ago 3 replies      
The 40% number isn't very informative. The report has multiple notes about it:

ODI analyzed data from crashes of Tesla Model S and Model X vehicles involving airbag deployments that occurred while operating in, or within 15 seconds of transitioning from, Autopilot mode. Some crashes involved impacts from other vehicles striking the Tesla from various directions with little to no warning to the Tesla driver.

ODI analyzed mileage and airbag deployment data supplied by Tesla for all MY 2014 through 2016 Model S and 2016 Model X vehicles equipped with the Autopilot Technology Package, either installed in the vehicle when sold or through an OTA update, to calculate crash rates by miles travelled prior to[21] and after Autopilot installation.[22] Figure 11 shows the rates calculated by ODI for airbag deployment crashes in the subject Tesla vehicles before and after Autosteer installation. The data show that the Tesla vehicles' crash rate dropped by almost 40 percent after Autosteer installation.

21 Approximately one-third of the subject vehicles accumulated mileage prior to Autopilot installation.

22 The crash rates are for all miles travelled before and after Autopilot installation and are not limited to actual Autopilot use.

So the actual rates of crashes for Teslas using Autopilot vs Teslas not using Autopilot aren't reported.

6
randomstring 3 hours ago 1 reply      
Waiting for the headline: "Human Fails to Prevent Accident, Outraged Public Calls for Banning of all Human Drivers"

The obsession with perfection in self-driving cars is misplaced, they just need to be demonstrably better than humans.

This is obviously the future.

7
huangc10 4 hours ago 4 replies      
Can anyone who is in the industry comment on how Autopilot performs in poor weather (i.e. flash floods, thunderstorms, snowstorms, etc.)?

All I can find from the article about weather was in section 3.1:

> The manual includes several additional warnings related to system limitations, use near pedestrians and cyclists, and use on winding roads with sharp curves or with slippery surfaces or poor weather conditions. The system does not prevent operation on any road types.

8
cbr 3 hours ago 1 reply      
This is really good news. A major worry with driverless cars has been that companies would be harshly punished for accidents, even if there was a dramatic reduction in crashes overall.
9
bcaulfield 3 hours ago 0 replies      
So I'm far less likely to crash if I use this, and I have something to blame if I do. Everybody wins! (Except the engineers).
10
battlebot 56 minutes ago 0 replies      
I don't completely trust the NHTSA, and I'm skeptical about auto-piloting cars, but I accept that more and more of those will be on the roads. I will never ride in a vehicle that lacks an override mechanism.

In general, I think we are moving way too fast towards these self-driving vehicles, because certain factions want to try to replace long- and short-haul truckers with robotic systems that are cheaper, and damn the consequences.

11
mrtron 2 hours ago 1 reply      
What other car company can even recover the airbag deployment rate per mile?
12
ChuckMcM 2 hours ago 0 replies      
That is a pretty remarkable report. It essentially holds Tesla up as an exemplar of the standard other car makers will be expected to achieve.
13
zekevermillion 2 hours ago 0 replies      
Impressive. A 2/5 reduction is a lot of lives saved.
14
tn13 2 hours ago 0 replies      
The 40% figure is meaningless unless the absolute numbers are reported. How do we know if this difference is statistically significant?
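
For illustration, here is a minimal sketch of the kind of significance check being asked for, treating airbag deployments as Poisson events. The crash counts and mileages are made up (the report publishes rates, not counts), and it assumes SciPy >= 1.7 for stats.binomtest:

  from scipy import stats

  # Hypothetical counts -- NOT figures from the NHTSA report.
  crashes_before, miles_before = 130, 100e6   # 1.30 crashes per million miles
  crashes_after,  miles_after  = 160, 200e6   # 0.80 crashes per million miles

  rate_before = crashes_before / miles_before
  rate_after = crashes_after / miles_after
  print("reduction: %.0f%%" % (100 * (1 - rate_after / rate_before)))  # ~38%

  # Conditional test for two Poisson rates: if the rates were equal, the
  # "after" share of crashes should match the "after" share of miles.
  p_exposure = miles_after / (miles_before + miles_after)
  total = crashes_before + crashes_after
  print("p-value: %.4f" % stats.binomtest(crashes_after, total, p_exposure).pvalue)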
15
sandworm101 4 hours ago 3 replies      
Great. There is no doubt that driver assists cut down on crashes. But what Tesla has on the road is far from a total eyes-closed autopilot. That is an inflection point with this tech that nobody has dared to test on the public road. I remain unconvinced pending those trials.

Also, I still haven't seen any autodrive system handle off-road driving, such as boarding a car ferry, or navigate a construction zone manned by an inattentive flag person.

16
dkonofalski 4 hours ago 3 replies      
I don't really know why this is surprising. Computers are already better than humans at most tasks that involve a limited set of behaviors and they have infinitely better response time than humans (and continue to get better). How could anyone think that a report like this was going to end up any differently?
3
How Discord Stores Billions of Messages Using Cassandra discordapp.com
67 points by jhgg  1 hour ago   11 comments top 3
1
jakebasile 27 minutes ago 2 replies      
I use Discord a fair amount, and something that annoys me about it is that everyone has their own server.

I realize this is a key part of the product, but the way I tend to use it is split into two modes:

- I hang out on a primary server with a few friends. We use it when we play games together.

- I get invited to someone else's server when I join up with them in a game.

The former use case is fine but the latter annoys me. I end up having N extra servers on my Discord client that I'll likely never use again. I get pings from their silly bot channels (seemingly even if I turn notifications off for that server/channel), and I show up in their member lists until I remove myself.

I wish there was a way to accept an invite as "temporary", so that it automatically goes away when I leave or shut down Discord. Maybe keep a history somewhere if I want to go back (and the invite is still valid).

Aside from that, it's a great product and really cleaned up the gamer-focused voice chat landscape. It confuses me that people will still use things like TeamSpeak or (god help you) Ventrilo when you can get a server on Discord for free with far better features.

Now that I posted this, I realize this has little to do with TFA. Sorry.

edit: formatting, apology

2
joaodlf 22 minutes ago 0 replies      
Not surprised to see other companies facing issues with Cassandra and tombstones. Don't get me wrong, I understand the need for tombstones in a distributed system like Cassandra... It doesn't make them any less of a pain, though :).
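
For readers unfamiliar with the issue, here is a toy Python model of the mechanism; it sketches the shape of the problem only, not Cassandra's actual internals. Deletes append tombstone markers instead of removing data, so a range read has to scan (and then discard) every tombstoned cell until compaction catches up:

  # Toy model only: deletes append tombstones; range reads must scan them.
  class ToyTable:
      def __init__(self):
          self.cells = []                  # append-only, like flushed SSTable data

      def write(self, key, value):
          self.cells.append((key, value))

      def delete(self, key):
          self.cells.append((key, None))   # a tombstone is just another cell

      def read_range(self, lo, hi):
          latest, scanned = {}, 0
          for key, value in self.cells:
              if lo <= key <= hi:
                  scanned += 1
                  latest[key] = value      # later cells shadow earlier ones
          live = {k: v for k, v in latest.items() if v is not None}
          return live, scanned

  t = ToyTable()
  for i in range(100):
      t.write(i, "msg-%d" % i)
  for i in range(90):
      t.delete(i)                          # e.g. purging old messages in bulk
  live, scanned = t.read_range(0, 99)
  print(len(live), "live rows,", scanned, "cells scanned")   # 10 live, 190 scanned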
3
no_protocol 48 minutes ago 2 replies      
Publicly naming a group of users who caused a problem with their system makes me uncomfortable enough to avoid trying their service. How do I know the next blog post won't be calling me out for doing something wrong?

Also, apparently you can't search past messages...yet.

4
FAP80, a retro computer without the retro baggage github.com
112 points by signa11  3 hours ago   40 comments top 14
1
lisper 2 hours ago 1 reply      
This is cool, but there's an argument to be made that the baggage is the whole point of doing retro computing. You can emulate a Z80 on an STM32 and it will run faster than the real thing. Much faster. So why bother with a real Z80? Because there is something ineffably cool about the "real thing". It's kind of like vinyl records. Digital audio is vastly superior by any objective measure, and yet people still want vinyl. They don't want it despite the clicks and pops and the annoyance of having to clean it and futz with the needle and the turntable speed; they want it because of all of these things.
2
chadcmulligan 10 minutes ago 0 replies      
Thought the retro people might like this - the first computer I built - https://en.wikipedia.org/wiki/Dick_Smith_Super-80_Computer.

Z80-based, 16K of RAM (yes, that's a K), 2MHz processor. I plugged it into a cassette player for storage; loading programs was hit and miss. I later splurged and bought a Basic ROM; initially you had to program it in hex. I used it for most of my uni days, running linear regressions for lab. It cost me $300 AUD then, all my schoolkid savings :-).

This was the book I learned assembler on https://www.amazon.com/Programming-Z80-Rodnay-Zaks/dp/089588... a classic.

3
XtalJ 2 hours ago 1 reply      
Really nice to see a Z80 project :) I'm actually finishing my own Z80 microcomputer this weekend. But mine will utilize a symphony of exotic 7400-series chips and hard-to-get Z80 peripheral chips. All ceramic ones. So if anyone wants to build their own, it will cost several hundred dollars :-/. But the idea of the project was to learn how computers were built in the early 80s, not to take a modern approach like you did :-)

Fun name, by the way. Mine will be named Calculon80 or Calculon64 :-D

4
jdcarter 3 hours ago 2 replies      
The blog posts have a ton more detail and make for fascinating reading (or browsing the pictures):

https://dekunukem.wordpress.com/

It's an interesting take on retro; gives the user the feel of a retro computer without all the gnarly period-accurate hardware. Also amusing that the STM32 supporting CPU has far more power than the main Z80 CPU.

5
sehugg 3 hours ago 1 reply      
Interesting. I was poking around the MAME database and found that the Z80 was used as recently as 2010 by a company called Igrosoft to make video slot machines, along with some modern PLDs and support chips. They also use a YM2149-compatible sound chip for that authentic bleepy sound.

More details: https://github.com/mamedev/mame/blob/master/src/mame/drivers...

6
jasonkostempski 2 hours ago 3 replies      
I assume the awkward name is intentional.
7
noonespecial 2 hours ago 1 reply      
So instead of trying to recreate the good old days, I made the decision to liberally use modern parts to simplify the design process...

Then uses a thru-hole Z80.

If that's not a perfect illustration of nostalgia, I don't know what is. Remembering the good parts without all of the fuss of the not so good. Very nice project. I'd probably buy that as a hobby kit.

8
ENTP 2 hours ago 0 replies      
When i was a kid, i had a Mattel Aquarius. Those early days were great. Whilst this project does rekindle some of that nostalgia, I'm not sure I'll be FAPping anytime soon.
9
CodeWriter23 32 minutes ago 0 replies      
I think your project is awesome. Wish I had something like this 30 years ago. Having the Z80 clock controlled by a microcontroller is pure genius. We used an in-circuit emulator, and always wished we could write code for the ICE.
10
hvs 1 hour ago 0 replies      
I would also recommend looking at the RC2014 project. Kits are available for order on Tindie. The mailing list is very active and new developments happen regularly.

http://rc2014.co.uk/

11
gravypod 2 hours ago 1 reply      
I would really love someone to build something like this on one of those FRAM chips, so there is no volatile memory on the system. I'd love to write an operating system for such an environment (like those non-volatile time-sharing systems from the 80s that made use of stateless microkernels and really pushed the limits of disk).
12
hoodoof 2 hours ago 0 replies      
Is there a YouTube video of it in action?
13
amyjess 1 hour ago 0 replies      
Cool... the idea reminds me of TempleOS, which was meant to evoke the spirit of the C64 but run on and take advantage of modern hardware.
14
elpantalla 3 hours ago 0 replies      
Super cool. I'm impressed.
5
Introducing ProtonMail's Tor hidden service protonmail.com
219 points by vabmit  6 hours ago   64 comments top 12
1
ergot 5 hours ago 3 replies      
For those wondering how to create your own custom Tor onion address, look no further than: https://timtaubert.de/blog/2014/11/using-the-webcrypto-api-t...

And for those who think Protonmail are the only service with a custom address, think again, because Facebook has one too: https://facebookcorewwwi.onion/

You can find a tonne more at this list:

https://github.com/chris-barry/darkweb-everywhere/tree/maste...

And staying on topic, Mailpile has their own .onion

https://raw.githubusercontent.com/chris-barry/darkweb-everyw...

2
mike-cardwell 6 hours ago 3 replies      
This is not quite as good as riseup.net's onion support as it doesn't include SMTP services. See:

https://riseup.net/en/security/network-security/tor#riseups-...

 mike@snake:~$ torsocks telnet wy6zk3pmcwiyhiao.onion 25
 Trying 127.42.42.0
 Connected to wy6zk3pmcwiyhiao.onion.
 Escape character is '^]'.
 220 mx1.riseup.net ESMTP (spam is not appreciated)
So if your mail service supports onion addresses, then you can just replace "@riseup.net" in a user's email address with "@wy6zk3pmcwiyhiao.onion".

Alternatively, your mail service could have explicit configuration in place to identify @riseup.net addresses and route them to wy6zk3pmcwiyhiao.onion instead of the normal MX records. I do this with Exim by utilising Tor's TransPort+DNSPort functionality and then adding the following Exim router:

 riseup:
   driver = manualroute
   domains = riseup.net
   transport = remote_smtp
   route_data = ${lookup dnsdb{a=wy6zk3pmcwiyhiao.onion}}
Obviously this would be better if there were a way to dynamically advertise the onion address in the DNS instead of having to hardcode it in Exim.

[edit] - If they co-ordinated, Riseup and Protonmail, and potentially other similar privacy-respecting mail services, could send all their traffic to each other via Tor. If you work for either of these companies, please consider looking into this sort of relationship.

3
tptacek 3 hours ago 2 replies      
If you are so threatened that you feel the need to use a Tor hidden service to reach your email provider, you should know that email --- encrypted or not --- provides the worst protection of all possible encrypted messaging options. Don't use email for sensitive communication, and certainly don't rely on the security features of any email provider for your own safety.
4
a3n 4 hours ago 3 replies      
Asking from ignorance: why would I (a non-interesting person in a nominally free country, with non-interesting interests that could nevertheless become interesting depending on political shifts and shit) want to use this hidden service rather than plain old ProtonMail?
5
_eht 3 hours ago 3 replies      
Can anyone speak to their like/dislike of ProtonMail vs. Fastmail? I currently use Fastmail and I'm happy, but I'm always looking for something better.
6
jron 5 hours ago 1 reply      
Last I checked, ProtonMail required SMS verification for account creation.

Edit: When using Tor

7
ortekk 58 minutes ago 0 replies      
I wish ProtonMail would offer more email aliases with its paid plans - credential reuse is what often allows snooping on someone's online identity. That would really boost its value in terms of privacy.
8
benwilber0 1 hour ago 1 reply      
I always get the feeling that these kinds of services are NSA honeypots. Whether intentionally or unintentionally.
9
dgiagio 4 hours ago 2 replies      
Could someone expand on how an email service over Tor helps when the messages you send to others still go through the SMTP protocol (even with TLS) and are stored/relayed on unprotected servers?
10
lazyeye 3 hours ago 2 replies      
Why the funny domain name? Is there any technical reason why they can't use protonmail.onion?
11
tghw 2 hours ago 1 reply      
If only ProtonMail could import old mail, I would be giving them money.
12
akerro 4 hours ago 1 reply      
6
How Do You Measure Leadership? ycombinator.com
135 points by craigcannon  4 hours ago   58 comments top 20
1
freddyc 1 hour ago 2 replies      
Over the years, a test I've often used is asking "how does this person respond to being challenged/questioned?" A great leader tends to embrace the fact that someone is asking "why" and uses it as an opportunity to learn and potentially convert the questioning party (if they're questioning something in the first place, then you haven't nailed it 100%). A weak leader who doesn't have confidence in their abilities sees the challenge as a personal attack and reacts in a knee-jerk fashion (often, though not always, resulting in a termination). If you can't reconcile differing opinions and convert those with opposing views, then you're doomed as a leader, and odds are your company/team will experience high turnover.

Obviously there's a whole range of other traits that make great leaders, but I've found people that fail this test are almost always terrible leaders who others don't want to work for.

2
imh 1 hour ago 1 reply      
I'm sad not to see an emphasis on giving a shit about the lives of those people you're leading. Personal development, career development, family, fun, etc. These are all hugely important to people outside of whatever widgets they are contributing to. A good leader should care about helping the people they lead achieve their goals, and not just in the sense of finding people who are willing to pretend their goals align with the widgets.
3
ChuckMcM 2 hours ago 2 replies      
It is always interesting when someone who believes themselves to be a great leader discovers that they are not. And since many of the traits that make great leaders (self-awareness, humility, honesty, etc.) are missing in these folks, the world around them sort of explodes when that realization hits. In my experience, it is a time when they are most likely to embrace 'leadership through politics.' It is always a strong signal that it is time to distance oneself from the faux leader's area of influence.
4
claar 2 hours ago 2 replies      
Also a great read along these lines is "The 21 Irrefutable Laws of Leadership" by John Maxwell, which I'm close to finishing currently.

Maxwell claims that leadership is influence, not authority. When I became a co-founder, I thought that made me a leader. But as PG's excellent post and Maxwell affirm, leadership is quite distinct from positional authority -- and is much more difficult to attain.

Speaking directly to this post, I found that rating myself against Maxwell's "21 laws" was a sobering and likely accurate gauge of my leadership ability.

5
Cyranix 1 hour ago 0 replies      
RE: "Clarity of Thought and Communication" I have worked at a couple of places that put a lot of effort into internal communications, selling employees on upcoming product changes they'll be working on, but failed to acknowledge the existing significant problems that everyone saw and that were repeatedly punted on. Being able to give a slick pitch is not sufficient for this leadership criterion; the narrative must be "credible" (as mentioned rather briefly in the article). Is it just me, or do other people find themselves frustrated at internal messaging that is self-consistent but not grounded in reality?
6
ktRolster 40 minutes ago 0 replies      
There's kind of a difference between a manager and a leader.

Manager - Makes sure things get done. If someone quits, finds a replacement, etc. We should all be managers of ourselves.

Leader - A person that employees are willing to follow. Makes the group into a team, working together. Actually cares about the members in his team, protects and defends them. Fights to get them raises, etc.

7
edw519 2 hours ago 4 replies      
I've had 80 bosses. 77 of them sucked. I would march through hell to help the other 3 get something done. For me that pretty much sums it up. All the rest is fluff.

FWIW, OP's 3 metrics:

 1. Clarity of Thought and Communication
 2. Judgment about People
 3. Personal Integrity and Commitment
Those should be necessary but not sufficient characteristics of every person in your organization.

EDIT, response to walterbell & el_benharneen about what made the 3 different (in no particular order):

 - They always told the truth (to everybody).
 - They knew their stuff (tech, system, user domain).
 - They figured out the right thing to do.
 - They communicated often and flawlessly.
 - They did whatever it took to get the right thing done.
 - They smiled almost all the time.
 - They made each person feel special.
 - They made work fun.
 - They were always teaching something.
 - They called bullshit instantly.
 - They protected their team.
 - They inspired us by showing how good things could be.

8
sbierwagen 2 hours ago 2 replies      

 It is based on observations I made when working closely with four leaders that I consider extraordinary: Ed Catmull (Pixar's founder), Steve Jobs (Pixar's CEO), John Lasseter (Pixar's Chief Creative Officer), and Bob Iger (Disney's CEO).
All four of these guys were involved in wage-fixing, which cost their companies $415 million. https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...

So, "extraordinary" in the sense of being extraordinarily unprofitable.

9
curiouslurker 12 minutes ago 1 reply      
Great read, but did Steve Jobs really have personal integrity? He was famously two-faced, manipulative, and as petulant and petty as a child, often settling personal scores with business decisions.
10
prewett 2 hours ago 0 replies      
Leadership is people development. So, how many people have you developed? How many times have you reproduced yourself?

If you want to grow your company, you are going to have to reproduce yourself, so that the new you is doing the old role and you can step into the new one, or perhaps relieve yourself of excess roles. That role may or may not have the title you had when you were doing it, however. You might be titled "CEO" when you are leading a team of 5 people, but you will reproduce yourself as "Team Leader" as you start adding teams.

Merely having clarity of thought and integrity does not make you a leader, it makes you a great team member. Merely having good people judgement makes you a good manager, not necessarily a good leader. Developing people makes you a good leader. It's hard to do that without the other three, though.

11
Macsenour 2 hours ago 0 replies      
Being a boss and being a leader are two very different things.

That may seem obvious to those who understand it; those who don't will think I'm nuts. As a scrum master, I have been a leader at every company where I have worked. I have never had anyone report to me in those same companies, aka not a boss.

12
6stringmerc 2 hours ago 0 replies      
Leadership can be measured by simply stripping away all external factors that could distort the ability to quantify the Individual Leadership Quotient. A few such elements would include, but are not limited to: A) Talent and Aptitude of Followers, B) Macro Economic Conditions, C) Luck, D) The Weather...basically I think the notion of Leadership is very elastic and, more often than not, highly circumstantial.

What is good Leadership for a bunch of grunts storming a beach in combat isn't objectively comparable to good Leadership for a bunch of teenagers in a classroom environment. There are some "Characteristics" I think that can be described and discussed as a useful musing on the concept, but it has to be qualitative not quantitative from my perspective.

13
alfonsodev 22 minutes ago 0 replies      
Two things:

By the professional/personal growth of each team member and by the harmony of the group.

14
zzalpha 4 hours ago 1 reply      
All the qualities they identify here are, in my mind, absolutely necessary (though not sufficient) for someone to be a good leader.

But, despite the title of the article, none of them are objectively quantifiable.

15
pjmorris 1 hour ago 0 replies      
A leader is best when people barely know he exists, when his work is done, his aim fulfilled, they will say: we did it ourselves.

- Lao Tzu quote opening 'Becoming a Technical Leader' by Jerry Weinberg

16
ThomPete 2 hours ago 0 replies      
You don't. You experience it.
17
eruditely 3 hours ago 0 replies      
You should probably follow Nassim Taleb's idea of not trying to measure x (leadership) versus the output of leadership f(x), and instead try to measure the exposure and how it impacts it. Since probably the most significant effort has gone into probability theory and trying to get a measure of x, that's probably the place to look.

And you would NOT try to measure it as a point estimate, as many have reminded us; you would try to set lower and upper bounds.

18
losteverything 3 hours ago 2 replies      
Getting people to do things they don't want to do.

I believe from Jack Welch

19
ajmarsh 4 hours ago 0 replies      
By the output of the employees who are led/managed?
20
arca_vorago 2 hours ago 0 replies      
Leadership is intangible, hard to measure, and difficult to describe. Its quality would seem to stem from many factors. But certainly they must include a measure of inherent ability to control and direct, self-confidence based on expert knowledge, initiative, loyalty, pride and sense of responsibility. Inherent ability cannot be instilled, but that which is latent or dormant can be developed. Other ingredients can be acquired. They are not easily learned. But leaders can be and are made. - General C. B. Cates, 19th Commandant of the Marine Corps

Ingrained in my brain from my Marine Corps days is the acronym JJDIDTIEBUCKLE as the list of leadership traits, and it has served me well since, although in the civilian world I have had to lower my expectations of others around me having even a fraction of such traits.

Relevant reading for those curious about how the Corps approaches leadership: http://www.tecom.marines.mil/Portals/120/Docs/Student%20Mate...

7
Stepping into math: Open-sourcing our step-by-step solver socratic.org
427 points by shreyans  9 hours ago   96 comments top 16
1
analog31 6 hours ago 6 replies      
This seems interesting because it addresses the issue of "show your work." Many years ago, I spent a semester teaching the freshman algebra course at the nearby Big 10 university. This is the course that you take if you don't get into calculus. My students were bright kids -- they were all admitted to the state flagship school -- but not mathematicians.

There was huge variation in the preparation that kids brought with them from high school. In particular, very few of them understood what "show your work" means. They were told "show your work," but nobody told them what it really entails. Is it just to provide evidence that you did some work, to deter cheating, or is it something else? Many of my students were taught "test taking skills" such as the guess-and-try method. So on one exam, a question was:

x^3 = 27

One student's work:

1^3 = 1

2^3 = 8

3^3 = 27

Answer = 3

I asked the professors to tell me what "show your work" means. None of them had a good answer! These were the top mathematicians in the world. I wanted to talk with my students about it, but I'm not even sure that my own answer was very good.

But if we did well in math, then we just know what it means. It's not just evidence that you did the work. It doesn't mean "turn in all of your chicken scratch along with the answers." It means something along the lines of supplying a step-by-step argument, identifying the premises and connecting them with the conclusion, in a language that is "accepted," i.e., that mimics the language of the textbook / teacher. In fact, the reason to read the textbook and attend lectures, is to learn that language. (It's not so different in the humanities courses).
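
For contrast, a "shown" solution to the exam question above reads as a chain of justified steps rather than a log of guesses, something like:

  x^3 = 27
  x = 27^(1/3)    (take the real cube root of both sides)
  x = 3           (since 3^3 = 27)

Each line follows from the previous one by a stated rule; the guess-and-try transcript arrives at the same answer but makes no argument that it is the only one.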

At least, that's my take on it, as just one teacher with one semester's worth of experience.

In my view, a problem solving tool that actually addresses the process of building the argument and not just determining the answer, would be beneficial to students.

2
jorgemf 7 hours ago 1 reply      
Some years ago I tried to do something a bit more complex: http://telauges.appspot.com/mathsolver/

My idea was to use planning and A* search to solve any type of math problem, and even create proofs for things like the quadratic equation https://en.wikipedia.org/wiki/Quadratic_equation . I gave up after I learned that the search space was so big that it was impossible to solve. If I had to do it today, I would explore deep learning as a heuristic, but I think it probably won't work.
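
For a feel of the approach, here is a minimal Python sketch of A* over rewrite rules for the easy case, a linear equation in one unknown. The state encoding, rules, and heuristic are invented for illustration; the branching factor that kills the general version appears as soon as you add the full algebra rule set:

  # A* over rewrite rules. State (a, b, c, d) encodes a*x + b = c*x + d.
  import heapq

  def moves(s):
      a, b, c, d = s
      yield ("subtract %sx from both sides" % c, (a - c, b, 0, d))
      yield ("subtract %s from both sides" % b, (a, 0, c, d - b))
      if c == 0 and b == 0 and a not in (0, 1):
          yield ("divide both sides by %s" % a, (1, 0, 0, d / a))

  def h(s):
      # Heuristic: how far the state is from the solved shape 1x + 0 = 0x + d.
      a, b, c, _ = s
      return (a != 1) + (b != 0) + (c != 0)

  def solve(start):
      frontier, seen = [(h(start), 0, start, [])], set()
      while frontier:
          _, g, s, path = heapq.heappop(frontier)
          if h(s) == 0:
              return path + ["x = %s" % s[3]]
          if s in seen:
              continue
          seen.add(s)
          for step, nxt in moves(s):
              heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [step]))

  print(solve((3, 4, 1, 10)))   # solves 3x + 4 = x + 10
  # -> ['subtract 1x from both sides', 'subtract 4 from both sides',
  #     'divide both sides by 2', 'x = 3.0']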

I always like to see this type of project; I hope they succeed where I failed.

3
tgb 8 hours ago 6 replies      
Has anyone done a study to see if this kind of aided solving actually helps students learn? I'm worried that "Eh, I'll just write this solution down today, I'm sure I'll learn it tomorrow" is what happens.

Awesome software though.

4
stdbrouw 6 hours ago 0 replies      
Worked on something like this as a hobby project a while ago, but to avoid the complexities associated with solving arbitrary exercises, I instead set it up as an algebra exercise generator: you start with the solution, which you then (algorithmically) obfuscate by splitting terms and recombining things for a couple of rounds. I never got around to finishing it, but the neat thing is that you've already generated one possible way to solve the problem: it's just how you generated the exercise, in reverse.

Another thing that's quite easy to do is to check intermediate steps in a solution for equivalence. You don't even really need a CAS; just brute-force the problem by probing the equations: set all variables to randomly chosen values, n times, and if the sets of results are the same for both equations, you're good.
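
In Python, that probe check is only a few lines; the trial count and tolerance below are arbitrary choices:

  import random

  def probably_equivalent(f, g, n_vars, trials=20):
      for _ in range(trials):
          xs = [random.uniform(-100, 100) for _ in range(n_vars)]
          try:
              fv, gv = f(*xs), g(*xs)
          except ZeroDivisionError:
              continue                  # unlucky probe (hit a pole); try another
          if abs(fv - gv) > 1e-6 * (1 + abs(fv)):
              return False              # a single disagreement settles it
      return True                       # equivalent with very high probability

  # A correct expansion vs. the classic mistake:
  print(probably_equivalent(lambda x: (x + 1)**2, lambda x: x*x + 2*x + 1, 1))  # True
  print(probably_equivalent(lambda x: (x + 1)**2, lambda x: x*x + 1, 1))        # False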

Anyhow, Socratic looks great and a great deal more advanced and useful than what I came up with, so kudos!

5
benbristow 7 hours ago 4 replies      
I'm jealous of kids these days... homework would've been so much easier with this.

You could always use a calculator but the whole 'show your own working' catch meant you had to do it all manually. Not any more!

6
yequalsx 3 hours ago 2 replies      
It's a nice program and I can see it being both helpful and harmful. From my perspective, as a teacher of mathematics at a community college, students are unwilling to engage in thought about a problem. If they can't see the solution in a few minutes then they want to look at a complete solution. Mostly they are not willing to struggle through a problem.

I vacillate on whether, with the advent of computer algebra systems, it is necessary for students to master algebraic manipulations. I have started to think that conceptual questions are better.

For instance: give me an example of an equation with no solution. Explain how a baseball player can have the highest batting average in both the first half of a season and the second half of a season, but not have the highest overall average. Draw the graph of a function defined on [0, 1] that has no maximum or minimum.

Students can't do those types of problems either. They are very frustrating problems for students because they require you to really think about what the words mean and to consider extreme situations. So I've reverted back to the traditional style of teaching math: manipulation of symbols.

7
therealmarv 8 hours ago 1 reply      
Does anyone know if there is a good open source library for making equations (Latex, MathML) out of pictures like in their demo?
8
chriswarbo 6 hours ago 0 replies      
Very interesting work, and well-explained in the post.

Like many others here, I suppose that in its basic form this would mostly be used for cheating on homework, although it would certainly be useful for those (few?) students who are truly motivated to self-learn the material rather than just pass the tests.

One thing which springs to mind is "Benny's Conception of Rules and Answers in IPI Mathematics" ( https://msu.edu/course/cep/953/readings/erlwanger.pdf ), which shows the problem of only focusing on answers, and on "general purpose" problem sets: namely, that incorrect rules or concepts might be learned if they're reinforced by occasionally giving the right answer.

I think it would be interesting to have a system capable of some back-and-forth interactivity: the default mode would be the usual, going through some examples, have the student attempt some simple problems, then trickier ones, and so on.

At the same time, the system would be trying to guess what rules/strategies the student is following: looking for patterns, e.g. via something like inductive logic programming. We would treat the student as a "black box", which we can learn about by posing carefully crafted questions.

Each question can be treated as an experiment, where we want to learn the most information about the student's thinking: if strategies A and B could both lead to the answers given by the student, we construct a question which leads to different answers depending on whether A or B were used to solve it; that gives us information about which strategy is more likely to be used by the student, or maybe the answer we get is poorly explained by A and B, and we have to guess some other strategies they might be using.

Rather than viewing marking as a comparison between answer and a key, we can instead infer a model of the domain from those answers and compare that to an accurate model of the domain.

We can also use this approach the other way around, treating the domain as a black box (which it is, from the student's perspective) and choosing examples which give the student most information about it.

9
Steeeve 8 hours ago 1 reply      
Now... the only thing remaining is to translate this to common core :).

I say that in jest, but doing so would make common core much easier for parents AND teachers to grasp. There's an enormous divide between those who get it and those who hate it, and providing parents/teachers with something that would help them understand the benefits of common core concepts would be a gigantic win.

10
aidos 8 hours ago 0 replies      
That's so cool.

Reminds me of how different the learning experience is now. When we were at school (80s/90s), there was nowhere to turn if you didn't have the answer. My parents had an Encyclopedia Britannica set, so at least there was a paragraph to go on. It's amazing how good you became at fleshing out that paragraph into an essay :-)

11
gravypod 8 hours ago 1 reply      
Now that this exists, I think it's worth creating an open-source version of the TI-Nspire for engineers & mathematicians: something based on cheap hardware that runs Linux and can implement this plus a theorem prover, to basically make the most handy lab calculator.
12
poseid 6 hours ago 0 replies      
That feels like a nice application of AI, in a way. We often use a computer that can help in making a plan (e.g. a kind of map, or "steps" as here). This might be nice for understanding problem solving in general. Also, it's nice to see the project is in JavaScript; that means quite a few non-professional programmers could learn from it.
13
JotForm 7 hours ago 0 replies      
This is such inspiring software.
14
MichaelBurge 5 hours ago 1 reply      
People here keep saying this will change learning and be good for students, but the only real difference is that it's open source. You can already get step-by-step solutions for more types of problems from Wolfram Alpha, and you can already get API access if you're a third-party developer who needs it:

http://www.wolframalpha.com/input/?i=2*y+-+x+%3D+(8+*+x+%2B+...

I don't think it will have any real effect.

15
StefanKovachev 7 hours ago 1 reply      
16
GrumpyNl 9 hours ago 0 replies      
It looks like Sheldon came through.
8
Neural Architecture Search with Reinforcement Learning arxiv.org
32 points by saycheese  2 hours ago   8 comments top 4
1
ericjang 1 hour ago 1 reply      
The primary author is a Google Brain Resident (https://research.google.com/teams/brain/residency/). The Brain Residency is a really great program for starting a career in Deep Learning and ML research, and I'm really impressed by how quickly these new researchers churn out great work like this.

disclosure: I work at Google Brain

2
gallerdude 1 hour ago 2 replies      
I think this is the way that Neural Networks achieve some modicum of generality - chaining them together.

Let's say you want a robot to grab a can of beer off the counter. You say "grab that beer" and point to it. The first neural network interprets the speech and visual input. A second neural network chooses the proper neural nets to continue the task based on the information interpreted by the first net - it picks one for walking and one for grabbing.

3
deepnotderp 25 minutes ago 1 reply      
This is pretty old, and neural nets can train neural nets too (better than humans, as usual). Check out "Learning to Learn by Gradient Descent by Gradient Descent".
4
saycheese 1 hour ago 0 replies      
I ran across this research while reading the article "AI Software Learns to Make AI Software", which is already posted here:

https://news.ycombinator.com/item?id=13436195

9
How Space Weather Could Trigger a Future Economic Crisis bloomberg.com
41 points by JumpCrisscross  3 hours ago   21 comments top 5
1
flippmoke 1 hour ago 1 reply      
As background: I used to work on the FAA's WAAS [0], a system that provides more accurate GPS and, more importantly, provides real-time integrity reports of GPS for users. A big part of my work was studying ionospheric storms and how they affected GPS. We made the majority of our analysis from data collected from the late 90s onward; the sad truth is that we have no real idea of what a massive solar event would do to satellites and the earth.

Satellites in general have a very hard time discharging large amounts of current, because there simply is no ground, and the possibility of a Carrington Event [1] in the modern age is simply frightening. An event of this size today could possibly knock out thousands of satellites at once -- including the entire GPS constellation.

The effects on earth could be very damaging too; what exactly would happen, we aren't quite certain. However, you could see arcs from power lines or any long-distance wire, as they would provide easy paths for electrons. This could also affect computers, much like an EMP blast. We could be looking at a large percentage of all electronics broken. So we might suddenly have large areas with no power, no electronics, and no communication.

After spending lots of time learning about the science behind these storms, to me this is the stuff of nightmare fuel.

[0]: https://en.wikipedia.org/wiki/Wide_Area_Augmentation_System

[1]: https://en.wikipedia.org/wiki/Solar_storm_of_1859

2
brentm 2 hours ago 3 replies      
> A world without power because of damaged transformers would become economically stagnant

Economic stagnation would probably be the least of our problems. The cascading issues from a world without power for an extended period of time are incredibly far reaching. It would most certainly be a form of chaos.

3
neaden 1 hour ago 1 reply      
The descriptions of the effects of the 1859 solar storm on the telegraph system [0] are pretty crazy. It started fires, shocked operators, powered unplugged machines, etc. I can't imagine what that would be like with all of our modern machines.

[0]: http://www.history.com/news/a-perfect-solar-superstorm-the-1...

4
lutorm 2 hours ago 2 replies      
Am I right in thinking that, with advance warning, this could be mitigated by simply shutting down the power grids for a few hours? That would be a big deal, but clearly not as bad as having them taken out.
5
crpatino 2 hours ago 3 replies      
I find it interesting that Bloomberg gets all hyped up about the consequences of outer-space events that may eventually happen and that we have no way to prevent, but cheerfully ignores the consequences, and the very tangible costs, of climate change here on Earth.

"We cant dodge, prevent or suppress solar flares. But we can increase funding for early-warning systems such as the Space Weather Prediction Center in Boulder, Colorado."

Oh, wait. Nevermind... I get it now.

10
FizzBuzz in J, Explained wycd.net
64 points by wyc  3 hours ago   16 comments top 8
1
RodgerTheGreat 1 hour ago 0 replies      
Paraphrasing an earlier Reddit post, here's how it might look in K3.

Generate a range of numbers up to 20 (for brevity), inclusive:

 1+!20
 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Apply a function ({x!/:3 5}), which takes an element modulo 3 and 5, to the range:

 {x!/:3 5}1+!20
 (1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2
 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0)
Taking the negation (~) shows us the places the elements divide evenly:

 {~x!/:3 5}1+!20
 (0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0
 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1)
Convert these into base-2 indices:

 {2 _sv~x!/:3 5}1+!20
 0 0 2 0 1 2 0 0 2 1 0 2 0 0 3 0 0 2 0 1
We can then use those indices to select from a list. Note the addition of a map ('), because if we continue performing all these operations in parallel we won't have access to x as some particular item of the list:

 {(x;"Buzz";"Fizz";"FizzBuzz")2 _sv~x!/:3 5}'1+!20
 (1;2;"Fizz";4;"Buzz";"Fizz";7;8;"Fizz";"Buzz";11;"Fizz";13;14;"FizzBuzz";16;17;"Fizz";19;"Buzz")
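
For readers who don't speak K, here is the same trick transliterated to Python (a sketch for comparison; the K above is the reference): the two divisibility tests form a two-bit index that selects the output.

  for n in range(1, 21):
      idx = 2 * (n % 3 == 0) + (n % 5 == 0)   # the base-2 index from above
      print([n, "Buzz", "Fizz", "FizzBuzz"][idx])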

2
klibertp 8 minutes ago 0 replies      
There's a blog post about FizzBuzzBazz (note the third value) in LiveScript. I posted a comment there with a J version [1], looking like this:

 fu=:(3&|),(5&|),:(7&|)
 g1=:(,&'zz')"1>;:'Fi Bu Ba'
 ((0=fu i.101){"1(3 1$<''),.(,.<"1 g1)),<"0 i.101
Well, what else can I say? J is really fun to play with.

It's also not true that it cannot be maintainable or readable. J's crazy parsing rules and other language features make it very flexible, on the level of Lisp, Tcl, Perl, Smalltalk, or Io. This means it can be a completely unreadable mess, as well as readable and maintainable code, depending on who writes it. I made an attempt at writing readable - articulate - J: https://klibert.pl/posts/literate_j.html

[1] http://livescript.net/blog/fizzbuzzbazz.html#comment-1145818...

3
chx 12 minutes ago 0 replies      
:('Equilateral';'Isoceles';'Irregular';'Impossible'){::~3&,@<:@#@~.{~2&*@(>./)<+/

Belongs in https://www.stilldrinking.org/programming-sucks

In other words: I would not want to work with a language like this, because I know most of the work is maintenance, and it's hell if you need to work with something this sigil-heavy.

4
bloaf 40 minutes ago 0 replies      
I have had several opportunities to write Mathematica code in a style similar to this. Specifically, building up functions to map over entire lists rather than iterating over them.

Each step feels very intuitive and satisfying. But when you just look at the end result of any non-trivial example, it nearly always looks unreadable. I found myself needing to work quite hard to understand functions I myself had written, even if I was only interrupted by lunch.

5
101km 2 hours ago 1 reply      

 > 1{::(('FizzBuzz';'Fizz';'Buzz';":) 1337)
 Fizz
Is equivalent to:

 > ['FizzBuzz', 'Fizz', 'Buzz', 1337][1]
 Fizz
In a language with less foreign syntax. You can make it more familiar by putting the index on the right side of the array with '~':

 (('FizzBuzz';'Fizz';'Buzz';":) 1337){::~1
 Fizz
This is the modulo bit:

 (0 i.~15 3 5|])

6
throwaway7645 2 hours ago 1 reply      
I really like the concepts in APL/J. It's probably why I'm so excited to see more array-oriented support in Perl 6. Keep up the postings.
7
bkase 2 hours ago 0 replies      
I found your post very compelling. I have tried messing around with the APL derivatives (J and K and Q), but never created anything substantial. I didn't know you could view the AST like that, and it did really help in your explanation. I'm definitely going to check out an APL again! Thank you for the great post!
8
empath75 2 hours ago 1 reply      
"Explained"? I read it twice and still don't understand what it's doing.
11
How Were Building a Business to Last cockroachlabs.com
85 points by orangechairs  4 hours ago   12 comments top 6
1
coffeemug 5 minutes ago 0 replies      
RethinkDB [ex-]founder here.

The problem wasn't that we (and presumably others) didn't plan for open-core/cloud. We did, but there are structural problems in the market that prevent this from working.

Open-core didn't work because the space is so crowded with high-quality options that you have to give away enormous amount of functionality for free to get adoption. Given how complex distributed database products are, by the time you get to building a commercial edition you're many years in and very short on cash.

Cloud didn't work because AWS/GCloud have enormous moats of pricing and brand recognition. They drive margins down to epsilon, and if your product sees meaningful adoption in the industry they launch their own service and take all your customers.

2
filereaper 6 minutes ago 0 replies      
I'm really looking forward to the 1.0 release of CockroachDB. S3 and GCS were mentioned as configurable storage sinks; are there plans to release with Azure Blob support as well?
3
sply 24 minutes ago 0 replies      
This can be seen as a kind of response to concerns about the survival of other open-source databases raised after the closure of RethinkDB and its recent postmortem: https://news.ycombinator.com/item?id=13421608
4
sciurus 27 minutes ago 1 reply      
> In 2017, any product whose core capabilities cannot scale without requiring a commercial license is probably setting the bar too low.

Is this a dig at InfluxDB for removing clustering from their open source version?

5
hendzen 1 hour ago 3 replies      
It looks like you are trying to replicate the MongoDB Inc. business model (regardless of major differences in the actual product being offered).

MongoDB offers a commercial version of their product with enterprise features (encryption at rest, LDAP auth, etc) and support - MongoDB Enterprise.

Additionally they also offer managed, cloud hosted MongoDB deployments - MongoDB Atlas.

Over the last few years the valuation of MongoDB, Inc. has been slashed by institutional investors such as Fidelity and BlackRock. While they haven't had mass layoffs or some other negative corporate event, they have clearly had some difficulty making their (and apparently your) business model work.

Do you agree that this is a fair comparison? And what do you think makes Cockroach Labs more likely to succeed with this business model than MongoDB?

6
lclarkmichalek 3 hours ago 2 replies      
Do you envision features moving from CCL to APL? For instance, should the database ecosystem change such that everyone and their mother is offering row-level geo-partitioning in OSS databases, would it be likely that that feature would become APL-licensed?
12
Quick, How Might the Alien Spacecraft Work? backchannel.com
62 points by D_Guidi  4 hours ago   18 comments top 9
1
ansible 57 minutes ago 1 reply      
The problem with interstellar travel in general is that it is so expensive, in terms of energy usage. I speculate that even for Kardashev type 3 civilizations, shipping matter around will be impractical.

The future is software. For entities that live purely as software, and aren't so picky as us meatbags are about continuity of consciousness, "traveling" from place to place by sending data is the most practical method.

All that is needed is that the destination be prepared appropriately. So that means firing off a kilogram (or so) of self-replicating molecular nanotechnology which can build the infrastructure you'd want to live in (compute on) at the destination.

I haven't seen any indication yet from physics that something more practical is in the offing. There's still a lot we don't understand (GUFT, dark matter / energy), but there doesn't seem to be much hope for life as we know it to flit around the universe faster than light.

2
dghughes 2 hours ago 3 replies      
It always seems to involve mastering gravity, yet gravity is so incredibly weak compared to magnetism. Plus, "alien spacecraft" aka UFOs always seem to output a huge amount of light, so maybe gravity and light are linked in the process or mechanism of whatever makes the thing operate.

I like how the article mentions getting the science believable, or at least not terribly wrong. But what I dislike just as much is when the science parts are dismissed. Usually, when a scientist in a movie or TV show tries to explain something, the lead character rolls their eyes as if it were boring, and everyone laughs.

How can we convince kids that science is interesting if it's treated as a bore or a joke and ridiculed? I know it's meant as a writing device to make the lead character seem dumb compared to the smart scientist, but it sends a bad message: that science is boring.

3
xenophonf 1 hour ago 0 replies      
This is a duplicate of Wolfram's original blog post:

http://blog.stephenwolfram.com/2016/11/quick-how-might-the-a...

The previous discussion (about two months ago):

https://news.ycombinator.com/item?id=12940364

4
mynegation 12 minutes ago 0 replies      
A question for someone with a better understanding of physics: Wolfram talks about space as a network. But wouldn't that mean that there is an absolute reference frame? Relativity does not need one (I'm not sure whether it rules one out), and from what I know no such frame has been found. How can these be reconciled?
5
twic 22 minutes ago 0 replies      
> Gauss suggested back around 1820 that one could carve a picture of the standard visual for the Pythagorean theorem out of the Siberian forest, for aliens to see.

https://en.wikipedia.org/wiki/Gauss's_Pythagorean_right_tria...

Radio waves? Golden discs? Nah son, colossal geometric wheatfields and a few big hedges is what you want!

6
kordless 3 hours ago 2 replies      
Yesterday, I watched Bruno Vassel fly his glider in the Rocky Mountains for nearly 2 hours on YouTube. I hypothesize that someday we will be able to extract energy from the Ether to power our own spaceships, in a similar way to gliding. That's not to say we won't need to add our own energy to get somewhere specific!
7
wyldfire 2 hours ago 0 replies      
> The movie-makers were giving Christopher raw data, just like in real life, and he was trying to analyze it...

> In the final movie, the screen visuals are a mixture of ones Christopher created, ones derived from what he created, and ones that were put in separately.

I wonder -- did he synthesize the handwriting samples under analysis, or was that from the film's conventional creative team? I read Chiang's story after having seen "Arrival", and IIRC the descriptions weren't anywhere near as clear about the circular nature of the sentences.

8
pmontra 2 hours ago 0 replies      
There have been a number of posts about this movie on HN, many of them interesting: https://hn.algolia.com/?query=Arrival&sort=byDate&prefix=fal...

Some of them address the linguistic side of the movie.

9
viach 2 hours ago 0 replies      
Infinite Improbability Drive, of course!
13
A trip down the League of Legends graphics pipeline riotgames.com
189 points by adamnemecek  7 hours ago   72 comments top 8
1
ggregoire 5 hours ago 5 replies      
Several years ago, if I had to choose a video game company where I would have loved to work as a software developer, it would have been Blizzard Entertainment.

Now, it would definitely be Riot Games.

As a player, spectator and developer, I'm so impressed by their work: the evolution of their game since its release in 2009, how they listen to their community [1], try new things and, most of the time, admit their mistakes (e.g. removing the ranked solo queue in 2016) and roll back [2] or improve their changes when needed.

Their game is the most played game ever: ~100 million active players every month [3].

Their game is also the most watched game ever: ~40 million unique viewers for the finals of the last World Championship in September [4], and hundreds of thousands of viewers on Twitch every day (without even counting the Asian region). They have revolutionized e-sports in the West. I don't play anymore (for now) but I still watch most of the professional games, because they created an accessible, passionate and entertaining game to watch (for those interested, the new season just started in Korea, China and Europe, and starts tomorrow in North America [5]).

Really inspiring. Congratz Riot Games.

[1] https://www.reddit.com/r/leagueoflegends

[2] http://na.leagueoflegends.com/en/news/game-updates/competiti...

[3] http://www.riftherald.com/2016/9/13/12865314/monthly-lol-pla...

[4] http://www.espn.com/esports/story/_/id/18221739/2016-league-...

[5] http://www.lolesports.com/en_US

2
adamnemecek 7 hours ago 8 replies      
Few things consistently blow my mind like insane graphics demos do

https://www.shadertoy.com/view/4dfGzS (or basically anything on that site)

How is that only 400 lines of code?

Or this one which even generates the sound on the GPU

https://www.shadertoy.com/view/4ts3z2

With the wide adoption of WebGL, it's a good time to get involved in graphics. Furthermore, GPUs are taking over, especially with the advent of machine learning (Nvidia stock grew ~3x, AMD ~5x last year). The stuff Nvidia has been doing recently is kinda crazy. I wouldn't be surprised if in 15 years, instead of AWS, we are using a GeForce cloud or something, just because Nvidia will have an easier time building a cloud offering than Amazon will have building a GPU.

These are some good resources to get started with graphics/games

# WebGL Programming Guide: Interactive 3D Graphics Programming with WebGL

https://www.amazon.com/WebGL-Programming-Guide-Interactive-G...

Historically, C++ has definitely been THE language for doing graphics, but if you are starting these days, you would need really compelling reasons to start with C++ rather than JavaScript and WebGL. And that's coming from someone who actually likes C++ and used to write it professionally.

# Book of Shaders

https://thebookofshaders.com/

# Game Programming Patterns

http://gameprogrammingpatterns.com/contents.html

https://www.amazon.com/Game-Programming-Patterns-Robert-Nyst...

HN's own @munificent wrote a book discussing the most important design patterns in game design. Good book applicable beyond games.

# Game engine architecture

https://www.amazon.com/Engine-Architecture-Second-Jason-Greg...

# Computer graphics: Principles and Practice

https://www.amazon.com/Computer-Graphics-Principles-Practice...

This is more of a college textbook, if you'd prefer that, but the WebGL one is more accessible and less dry.

# Physically Based Rendering & Real-Time Rendering

These discuss some state of the art techniques in computer graphics. I'm not going to claim to have really read them but from what I've seen they are very solid.

https://www.amazon.com/Computer-Graphics-Principles-Practice...

https://www.amazon.com/Physically-Based-Rendering-Third-Impl...

3
che_shirecat 5 hours ago 2 replies      
I'm always impressed by how well hugely popular games like League and CSGO run on bottom-tier machines. Maybe it really isn't all that impressive, and it's just the number of poorly optimized bloatfests out there that skews my perception.
4
seanalltogether 5 hours ago 0 replies      
It's interesting to me how a few of the effects are still baked into the textures, which means they still can't let the user rotate their camera. Most people feel there is an advantage to playing on the bottom side due to camera perspectives, and allowing users to rotate their cameras could nullify this advantage.
5
mastax 3 hours ago 0 replies      
See also the fantastic graphics studies produced by Adrian Courreges.

http://www.adriancourreges.com/blog/2016/09/09/doom-2016-gra...

6
sputknick 3 hours ago 3 replies      
Along the lines of this, can anyone recommend a blog post or an article that demonstrates the mathematics that goes into game design? I want to show my 8-year-old, who wants to build games one day, why math is important. Maybe something that shows how matrices are used for placement, or vectors to drive movement?
7
OnACoffeeBreak 7 hours ago 0 replies      
The author answered a few question in this /r/gamedev thread: https://www.reddit.com/r/gamedev/comments/5otq0r/a_trip_down...
8
ryandrake 7 hours ago 0 replies      
Title is a nice nod to the classic _A Trip Down the Graphics Pipeline_ by the great Jim Blinn.
14
Learn Elixir with a Rubyist (IV) Types, Data Structures and Pattern Matching joaomdmoura.com
88 points by joaomdmoura  5 hours ago   39 comments top 3
1
rebelidealist 2 hours ago 5 replies      
As a Ruby programmer trying to take a look at Elixir, I still find it hard to read. Maybe functional languages take time to get used to, but it doesn't seem as "clear" as the procedural or OOP style. I know about the added benefits of speed and concurrency, but for most apps we build they aren't needed for now.

Do any Ruby programmers find Elixir easier to write or read? I'm genuinely curious to hear about this because a lot of Ruby devs are writing Elixir.

2
raarts 1 hour ago 2 replies      
Can the Ruby people please stop blogging about Elixir by explaining everything relative to Ruby? There are way too many 'Elixir-for-Ruby-programmers' tutorials on the net already.

Elixir is pretty great and enthusiasm is good, but I feel this leaves a large group of people in the dark.

3
aecorredor 4 hours ago 12 replies      
If you had to make a comparison, what would be the pros and cons of using Elixir vs. Node.js?
15
How Do I Declare a Function Pointer in C? fuckingfunctionpointers.com
96 points by jerryr  6 hours ago   36 comments top 9
1
dnquark 3 hours ago 3 replies      
The trick to reading crazy C declarations is learning the "spiral rule": http://c-faq.com/decl/spiral.anderson.html (here are more examples, with nicer formatting: http://www.unixwiz.net/techtips/reading-cdecl.html)
2
TheAdamist 2 hours ago 0 replies      
The new C++ alternative function syntax discussed here: https://blog.petrzemek.net/2017/01/17/pros-and-cons-of-alter...

mentions replacing function declarations for

 void (*get_func_on(int i))(int); 
with

 auto get_func_on(int i) -> void (*)(int);
which looks a lot more readable to me.

3
kruhft 1 minute ago 0 replies      
One of the only reasons I had "The C Programming Language" on my desk when I was a C coder. The only thing I could never remember...
4
petters 2 hours ago 2 replies      
Just use the typedef. Even if you personally find the other variants readable, chances are that your peers reading your code don't.
5
cestith 4 hours ago 1 reply      
For anyone unable or unwilling to access that domain name for work purposes or filtering purposes, the linked page lists this alternative:http://goshdarnfunctionpointers.com/
6
hzhou321 2 hours ago 2 replies      
I never got used to having variables sandwiched inside a type. I know I am not supposed to suggest out-of-the-box ideas, but why can't we add a new syntax, e.g.:

 return_type Fn(parameters) var; typedef return_type Fn(parameters) TypeName;
where Fn is a new keyword -- or not, if the compiler understands dummy syntax (I would suggest λ once using Greek letters in code becomes the norm).

It simplifies the C syntax a lot IMHO.

PS: now that I am out of the box, maybe this is better:

 Fn{return_type, param1, param2} *var;

7
bstamour 3 hours ago 3 replies      
This is one of those cases where I prefer C++

  #include <type_traits>

  template <typename Func>
  using function_ptr = std::add_pointer_t<Func>;
and now declarations are a bit more sane:

 void foo(function_ptr<void (int)> callback);

8
theophrastus 3 hours ago 1 reply      
Or, if one doesn't have cdecl installed, there's an online version [1], which has proven a useful check on several occasions.

[1] http://cdecl.org/

16
How Does the SQL Server Query Optimizer Work? xameeramir.github.io
29 points by xameeramir  3 hours ago   10 comments top 6
1
protomyth 55 minutes ago 1 reply      
The reality is that, even after more than 30 years of research, query optimizers are highly complex pieces of software which still face some technical challenges.

Yep. It still amazes me how queries can go from absolutely great to scanning tables with a row added in the wrong place. WITH RECOMPILE still scares the heck out of me. It can really help when distributions change, but it can also kill your performance with a bad query plan choice.

In addition to the hint shown, you can also force certain indexes. This didn't seem to be as big a problem on SQL Server as in its parent codebase, Sybase ASE. Sybase made some really, really bad choices on big tables.

No amount of optimization will fix a bad schema design.

This is also a field where talking to people about how you built your machines and where the indexes are located can make a world of difference. If you cannot have the whole database on the fastest storage, then at least put the indexes there.

2
rusanu 2 hours ago 1 reply      
I have a blog article that covers it in a bit more detail: http://rusanu.com/2013/08/01/understanding-how-sql-server-ex...
3
adamnemecek 2 hours ago 0 replies      
If this tickles your fancy, you should check out this site http://use-the-index-luke.com or a book by the same author https://www.amazon.com/Performance-Explained-Everything-Deve...

He goes into great detail to explain how exactly SQL indices work and how to leverage them when writing queries.

4
psi-squared 1 hour ago 1 reply      
There's a really nice article about the Postgres query optimizer, which goes into much more detail about the algorithms used (it's likely that at least the basic ideas are shared with SQL Server, though I can't say for sure). I really like the exposition in this:

http://jinchengli.me/post/postgres-query-opt/

5
arnon 2 hours ago 0 replies      
There's very little actual data here...

Oracle has a bit more data on their website: https://docs.oracle.com/database/121/TGSQL/tgsql_optcncpt.ht...

6
morgo 1 hour ago 0 replies      
I have a guide that shows the equivalent for MySQL. It's kind of split between these two pages:

http://www.unofficialmysqlguide.com/introduction.html

http://www.unofficialmysqlguide.com/server-architecture.html

17
Distributing NixOS with IPFS sourcediver.org
184 points by robto  8 hours ago   53 comments top 10
1
chriswarbo 7 hours ago 1 reply      
I've been following these github issues for a while; fetching sources from IPFS seems like a great step forward for resiliency in general, and quite a natural one for Nix considering things are already immutable. Using IPFS as a binary cache is nice, as it would lower the maintainers' burden and make out-of-tree experimentation easier, i.e. without damaging the integrity of nixpkgs and cache.nixos.org.

I hadn't even thought about using the FUSE integration of IPFS, but it makes a lot of sense. Nix is a lazy language, and the nixpkgs repository basically defines one big value: a set of name/value pairs for every package it contains (as well as various libraries for working with e.g. Python packages, Haskell packages, etc.). The only difference between installed/uninstalled packages is whether anything's forced the contents to be evaluated yet.

Likewise, an IPFS FUSE mount conceptually contains the whole of IPFS. The only difference between downloaded/undownloaded files is whether anything's forced the contents to be evaluated yet.

2
cjbprime 7 hours ago 4 replies      
Very cool.

One benefit of schemes like this that people don't talk about much is that, by no longer downloading from an expected place, you're removing the possibility for a compromised developer or server operator to selectively serve up malware to a targeted user. Instead you're getting the file over bittorrent and checking its hash, and you could gossip with other bittorrent clients to confirm that everyone's trying to get the same hash.

Compare with the state of the art in most software updates, which is that you connect to some download server and it could serve signed malware to people on its target list and probably no-one would notice.

(Schemes that use some of these techniques to take out the single point of malware-insertion have been called "Binary Transparency" schemes, as an analog to Certificate Transparency.)

3
twoodfin 6 hours ago 6 replies      
I've felt for a while that a standard, widely-implemented, distributed content-addressable store is one of the biggest missing pieces of the modern internet. Glad to see any steps in that direction.

I'll know real progress has been made when my browser can resolve something like:

cas://sha256/2d66257e9d2cda5c08e850a947caacbc0177411f379f986dd9a8abc653ce5a8e

4
civodul 6 hours ago 0 replies      
Nice project! Guix had a GSoC student working on binary distribution using GNUnet's file sharing component a while back: https://gnu.org/s/guix/news/gsoc-update.html . That has not led (yet?) to production code, but there might be ideas worth sharing.
5
k__ 6 hours ago 0 replies      
It's almost ridiculous how well the two fit together.

I had the feeling NixOS has a bit of a hard time getting users and proving that it's a superior solution to ansible/docker/chef/etc., probably because of its mediocre UX, haha.

But this would add another killer feature to it.

6
vog 7 hours ago 2 replies      
Very interesting development. It would be great to see NixOS as an early adopter for IPFS.

BTW, there is a small typo:

 IPFS is aims to create the distributed net.
It should be:

 IPFS aims to create the distributed net.

7
matthewbauer 3 hours ago 0 replies      
Stage 2 seems problematic, at least the way I see it. Most users have at least a thousand derivations -- is it possible to FUSE-mount each one?

Also: I think some people are unaware that Nix hashes are not content-addressable. The best solution (which the OP is proposing) is probably to use the .nar hashes in IPFS, which are content-addressable.

8
drdre2001 7 hours ago 0 replies      
This is a really great idea! Reminds me of other projects that are working on integrating IPFS with the Operating System: https://github.com/vtomole/IPOS
9
rkeene2 6 hours ago 1 reply      
Good to see other people are inventing AppFS ( http://appfs.rkeene.org/ ) :-)
10
citrusui 5 hours ago 1 reply      
I'm really excited to see what the future holds for IPFS! However, hosting websites with custom domains is not quite feasible yet. Using IPFS's naming system (IPNS) means you have to keep the IPFS daemon running constantly, or else the files will be purged within an hour.
19
Eventually Consistent: How to Make a Mobile-First Distributed System realm.io
74 points by astigsen  5 hours ago   22 comments top 7
1
bitwiseand 2 hours ago 3 replies      
I liked the article, but I feel the CAP theorem is again misquoted; this often leads to misunderstanding. From the article: "but they would still confront the reality of the CAP theorem: your system can be consistent, available, or partition-tolerant, and you can only pick two"

The CAP theorem states that in the event of a network partition you have to choose between C and A. More intuitively, any delay between nodes can be modeled as a temporary network partition, and in that event you have but two choices: either wait to return the latest data at a peer node (C), or return the last available data at a peer node (A).

Edit: Switched C and A

2
bsaul 4 hours ago 2 replies      
After having built an app that was supposed to work offline and resync when the connection is up, using OT as well (a basic one), I have the feeling that the general problem is hard, but if you stick to solving your specific problem, things become much simpler. As an example, OT operations can either be generic, such as "update a property of an object" -- and then good luck resolving conflicts -- or qualified with business context, such as "transfer money from account A to B", in which case the server-side code you synchronize with will be much better able to resolve issues case by case.
3
tlarkworthy 35 minutes ago 0 replies      
In Firebase we aim for causal consistency, which sits between eventual and strong consistency. Remote clients obviously don't see the same global ordering of events; they see their local mutations before their peers do (0-latency local writes). However, they do see remote mutations in the same order they occurred. So you never see "foo left the room" before "foo said x" -- remote mutations are always in the correct temporal ordering.
4
erichocean 1 hour ago 2 replies      
We shipped an app last year with Realm and are in the process of migrating off of it; it just wasn't reliable enough. We regularly saw crashes deep in their library with highly concurrent usage. Extremely frustrating.

Adding something complex like OT on top of Realm's existing foundation would make me even less likely to use it in the future.

That said, it's nice to see them tackling the problem and I wish them the best. Obviously, bugs can be fixed given enough time and effort, and in principle I like their product.

5
mjewkes 2 hours ago 1 reply      
The C# documentation is quite clear: https://realm.io/docs/xamarin/latest/

 var puppies = realm.All<Dog>().Where(d => d.Age < 2);
The LINQ integration is appreciated.

  // Update and persist objects with a thread-safe transaction
  realm.Write(() =>
  {
      realm.Add(new Dog { Name = "Rex", Age = 1 });
  });

  // Queries are updated in realtime
  puppies.Count(); // => 1
I'm assuming that "updated in realtime" means "realm.Write is a blocking operation", correct?

6
jonathan_mace 2 hours ago 0 replies      
Similar work from the research world: Diamond: Automating Data Management and Storage for Wide-Area, Reactive Applications

https://www.usenix.org/conference/osdi16/technical-sessions/...

7
tezza 3 hours ago 1 reply      
The text is very clear and easy to follow; however, a few concrete worked examples could help.
20
Samsung chief staves off arrest, prosecutor keeps chasing reuters.com
66 points by richardboegli  6 hours ago   9 comments top 2
1
JumpCrisscross 6 hours ago 3 replies      
Does anyone know if there is a family or commercial connection between the judge who dismissed Lee's arrest warrant and the Lee family? It's terrible that my mind goes there first, but such is the state of Park's Korea.
2
r00fus 3 hours ago 1 reply      
Looks like the saga is very likely to continue and plague Samsung and Lee.

> "The only thing that has changed is that he won't be detained now," commented Park Jung-hoon, a fund manager at HDC Asset Management, adding that uncertainties were likely to linger.

Does anyone see similarities between this and the Note7 debacle? It seems that reality is just not friendly to Samsung right now.

21
Why I don't believe in Uber's Success benjamin-encz.de
208 points by Ben-G  5 hours ago   232 comments top 37
1
CoffeeDregs 4 hours ago 13 replies      
The commercial airline industry has lost money over the whole of its existence. If I'm flying from NYC to DAL, I'm flying the cheapest carrier (with a slight nod to Frequent Flyer programs).

I'm not sure how Uber, Lyft, etc. are any different from Southwest, American, etc. Cab companies (which I hate) and airlines saw this years ago and promoted regulations to protect their fees (and, thereby, wages). Airlines lost those regulations with deregulation under Carter.

To wit, I had dinner with a few friends in SF and it was raining when we left: "I'll call an Uber and we can share. ewww... 250% surge pricing or $90. Lemme check Lyft. Sweet, Lyft is about $50. Our Lyft will be here in 3 minutes." There was zero friction switching from Uber to Lyft.

Winning in this market seems to require a Level 4+ autonomous car [1] monopoly. Level 4+ autonomous cars are not going to be here anytime soon, and Uber's not going to have a monopoly. So it's going to continue to be a gnarly price war, made worse by Level 3 (in which the "driver"/pilot is a student doing his homework for $5/hour, taking over driving once or twice per hour).

Not sure I agree so much with the body of TFA but I certainly agree with its conclusion.

[1] http://www.techrepublic.com/article/autonomous-driving-level...

2
falloutx 5 hours ago 6 replies      
In my opinion, giving large-scale subsidies to customers seems like a business anti-pattern, and it is surely going to kill a lot of startups in the near future.

I have been sold on the idea that investors are killing many an internet "business" by advising founders against simple & straightforward business plans. They constantly ask founders not to monetize early and to wait until the founders have no choice but to trash out the company/app to advertisers. The end result is that founders get more and more scared of asking their customers for money.

If it were up to me, I would take 1,000 paying customers over 100k free customers any day. This would also mean free customers (subsidized customers in Uber's case) wouldn't hog the company's valuable resources, and the company could better serve the smaller number of paying customers.

Edit: I don't really know how to spell "customer".

3
Animats 4 hours ago 2 replies      
Here's a table of Uber's funding rounds.[1] What's keeping Uber going is Saudi Arabia's sovereign-wealth fund, which put in $3.5 billion last summer. They also took on $1.15 billion in debt last year. That's real debt at 5% interest, not some convertible deal.

Bloomberg says Uber is losing $800 million per quarter.[2] Unless they can find a bigger sucker than the Kingdom of Saudi Arabia, they run out of money in 2018.

[1] https://www.crunchbase.com/organization/uber/funding-rounds

[2] https://www.bloomberg.com/news/articles/2016-12-20/uber-s-lo...

4
ahuja_s 5 hours ago 5 replies      
I live in Singapore and Uber is a little cheaper than the excellent cab services which operated here even before Uber. There is a big competitor to Uber here called Grab. I switched to Uber because their service is good, drivers are gracious and cars are nice and they give me good offers. Last year I took 300 Uber rides, never once taking the usual Comfort Delgro Cab because their drivers suck. If prices were the same, I would pick an Uber over the traditional cab over 90% of the time due to nicer drivers and service consistency across geographies. The rest of the 10% is when I urgently just need to hail a cab.

I don't think you meant to say that Uber won't be a success. You probably meant that you feel it's overvalued. Uber is a success. People love it. They pay surge prices for it. I do it all the time. It MAY be overvalued, but it is already a success. Wake up.

6
cody3222 3 hours ago 4 replies      
The network effects are pretty obvious, actually: as the number of people on the platform increases, the time a driver has to drive to pick you up gets shorter and shorter, enabling drivers to spend more of their time with a passenger in the car (which is when they are earning money). If drivers are making more money, Uber can pay them less (reduce the subsidy).

Old taxi cab companies can't compete because they have to drive much farther on average for each pickup.

7
Keyframe 5 hours ago 4 replies      
I don't understand his point. Maybe I'm dense (today, ha).

If Uber has a (simplified) per-ride cost split between the vehicle (+maintenance and/or fuel) and the driver, and the subsidised part generally covers the driver, then when you replace the driver with an autonomous vehicle and remove the subsidies, you're left with a (presumably) sustainable model on a certain margin that is already rolling. Rolling in the sense that it is already an established business -- people know it and use it. You've used subsidies (well, investors' cash) to build a business.

Of course, this relies on a presumption they will build a sustainable model on replacing drivers with autonomous vehicles. It also presumes they will not venture into other, (potentially) more profitable business like logistics.

One thing is certain: they are positioning themselves for a great catch, which relies on a few key components working in the (near) future.

I think the real hazard for Uber is regulation (of autonomous vehicles, for example) and market regulations (see the taxi debates in Europe).

8
exelius 3 hours ago 4 replies      
I think Uber's business model relies on self-driving cars. They're paying subsidies now to build a brand -- the name "Uber" has become synonymous with "pull out your cell phone and call for a ride" -- that will be very valuable once self-driving taxis become a commodity (which will happen shortly after their introduction).

Uber is not a bet on who can build the most profitable taxi company now -- it's a bet on a brand in an industry that will rapidly commoditize. Given that literally everything else Uber does has been replicated by at least one team at every hackathon I've been to in the last decade, there is very little sustained advantage from technology.

The auto industry is at a crossroads: you have new upstarts like Tesla that are very obviously planning to convert to a transit-as-a-service model. The "old" auto industry (basically everyone that makes cars and is not Tesla) is still struggling to adapt to a more "continuous development" model like Tesla's. Tesla's engineering process is far simpler -- an electric car replaces the complex internal combustion drivetrain (an engine block that requires separate air, water, oil and gasoline systems, plus the transmission) with a far simpler electric motor.

The electrical system in a Tesla is actually far simpler than your average car's: the sensor package in a modern internal combustion engine is an incredibly complex piece of engineering. This gives them a huge cost advantage over existing automakers -- if Tesla is providing transit as a service directly to customers AND making/maintaining the vehicles themselves, that displaces a lot of revenue (auto sales/maintenance to companies like Uber).

I think Uber will eventually merge with an auto manufacturer (and likely keep the Uber branding since it's likely to be the most valuable part of the company). They already have realized that Tesla is their biggest competition; and I think that the autonomous driving deal with Ford is simply testing the waters for a future acquisition.

15 years from now, most people will likely have 3 or 4 choices of how to get somewhere by car: Uber, Tesla, Lyft and likely a mix of local / regional companies. Brand value is powerful; and I guarantee you that at the end of this, the Uber brand will be worth more than what they've put into it.

9
msoad 5 hours ago 5 replies      
> Ubers growth is fueled by subsidies

A lot of people think their Uber/Lyft ride is cheaper than a traditional taxi because it's subsidized. The lower fare is, for the most part, due to the extreme efficiency difference between a taxi company and Uber/Lyft.

1. Uber/Lyft don't own the cars. They are leveraging car owners' capital.

2. Uber/Lyft drivers are more efficient because they don't have to roam around the city to find a passenger and they get notifications for when to work. The system scales up and down on demand. No taxi company that owns cars can do this.

3. Uber and Lyft are more convenient for the passenger, and that makes people use them more. I can definitely see myself and the people around me using Uber/Lyft way more than taxis since they came along.

Uber and similar companies are pouring cash into this growth because at the end of the day they can make a profit, because they are more efficient. And no, it's not easy to make a clone. The network effect is huge!

10
dchuk 4 hours ago 2 replies      
Uber's current problem is they have run out of things to innovate on. They've pretty much nailed the UX in their app, and all they're left with is cars taking people from Point A to Point B. What is left to improve on there?

So now, their only way to grow is to race to the bottom on price and undercut their competitors. And the only way to do that is to light billions of VC dollars on fire in the form of subsidized trips. That money isn't being invested in R&D or any form of innovation, just bridging the price gap between what the ride should cost (because of driver + vehicle costs) and what they are charging (which is a stupid low price most of the time).

The long shot they're taking on innovating by transitioning to self-driving cars is downright reckless, considering nearly all experts agree we're a MINIMUM of 5 years away from anything feasible in the real world, more likely 10+ years.

So they're going to have to raise their prices, or continue raising funds at an absurd rate (mind you, they've already raised $13,000,000,000 damn dollars). And they'll have to continue to light that VC money on fire in subsidized rides, rather than innovating on their product, because there's not really any other way to innovate on these rides.

As for the subsidies, I can't even understand why they are lowering their prices so aggressively anymore. It feels like each time I get into an Uber it's slightly cheaper. I was happy paying $25 for an uber to the airport rather than $30 for a cab, but now it's something like $14, which is great for my wallet, but I really don't even need it that cheap. It's bizarre.

11
shawndrost 3 hours ago 0 replies      
Why theorize about "economies of scale won't help"? We know the answer is "yes they will" -- Uber's US operations were profitable in 1Q16[1] and that included many non-scaled markets.

Like I said 8 years ago[2]: "Facebook made ~$200mm in 2008. It's pretty clear they could profit on those revenues, and instead are choosing to invest in further growth (with outside capital)."

[1] https://skift.com/2016/12/21/uber-isnt-profitable-in-the-u-s...

[2] https://news.ycombinator.com/item?id=427212

12
iamcasen 3 hours ago 2 replies      
What the OP and everyone in this thread don't seem to understand is that Uber has waaaay more up its sleeve than a mere taxi app.

Uber has more data about traffic patterns in every major city they operate in than any other entity, including the cities themselves. Uber can use that data as leverage in so many ways.

They have an API, they are a logistics platform in a sense. Uber's endgame will be allowing people to plug into that platform for a price.

If they can manage to become the de facto cab platform for all major US cities alone, they are close to being worth their current valuation as is. Expand this all over the globe.

Do not forget they now have a 20% stake in Didi as well, which will more than make up for their $2 billion loss trying to capture the Chinese market.

13
spanktheuser 2 hours ago 0 replies      
A few ways Uber may expand the market, based on what I've seen in Chicago:

* Uber seems to do a much better job of serving non-wealthy & minority neighborhoods. I can't find an online source, but distinctly remember a local NPR report that claimed over 2MM Uber rides originated or concluded in an underserved neighborhood, vs. about 200,000 taxi rides.

* Anecdotally, Uber's superior experience is changing behavior. Despite the city having a thriving taxi industry, hailing a taxi is very unpredictable. If it's cold, raining, a busy night, too early, too late, or rush hour, you may be waiting a long time even in a well-served neighborhood. Uber practically guarantees a ride, meaning that I and my friends are far more likely to venture out. Overall demand for transport seems to have increased.

* Similarly, the reliability of ride-sharing services permits many people to avoid car ownership altogether. Zip + Uber/Lyft is a very compelling and affordable alternative to car payments, fuel, rented parking and insurance.

14
alexlatchford 5 hours ago 1 reply      
The potential for autonomous point-to-point freight shipping should pay off sooner than its consumer business, allowing it to reduce its investment requirements. Debatable on the timelines though.

You assume though that the network they've built up isn't valuable when you talk about self-driving vehicles. I agree the car tech will be commoditised and to my mind Uber owning their own fleet isn't the best option. It'd be very capital intensive and not a great use of cash.

I liken the switch to self-driving cars to the buy-to-let home rental market. If you've got the money, why not buy one (or more) of them, send them out and rent them through Uber/Lyft etc., and keep the money rolling in around the clock? Uber takes a smaller cut but also doesn't incur anything like as much risk.

15
AndrewKemendo 5 hours ago 0 replies      
How do you define a successful business? Is it solely based on return to investors? How long does it need to be in business to be successful? How does Uber define success?

I hear rumors that IBM is not successful anymore. The same goes for GE and GM.

16
sytelus 1 hour ago 0 replies      
This is a zero-information article -- in fact, a zero-insight article. Getting into the app-based cab business is hard because you need users so that drivers come in, and you need drivers so that users come in. It's a chicken-and-egg problem. The only known way to break it is to put up a huge hustle. Uber is doing just that, and if it succeeds it can virtually monopolize this segment and reap the benefits years later, like Amazon. Everyone can make a cab app, but not everyone can get millions of users for their cab app.
17
kin 3 hours ago 1 reply      
Investors are giving Uber a near-infinite bankroll, so it's hard to say that they'll be allowed to fail.

I was in Southeast Asia recently and it's insane how cheap Uber is. It almost doesn't make sense to take any other form of transportation. I imagine once the competition dies, they'll have complete control over the market.

18
iaw 4 hours ago 0 replies      
Finally someone is saying it loudly. I've tried to make this point in the past to no avail. If Lyft can hold out they may survive over the long-term, but the strategies that allow Uber to be dominant will also prevent it from being profitable while maintaining market share.
19
mmagin 1 hour ago 0 replies      
As far as I can tell from how frequently Uber and Lyft give me discounts whenever I don't use them for a few days, they're burning crazy amounts of VC money just to grow.

If investment dries up before they get profitable, I think they're screwed.

20
pweissbrod 4 hours ago 0 replies      
It's an interesting topic but the financials are too black-box for us outsiders to form a strong opinion.

Yes, the drivers are subsidized, but the prices fluctuate with supply throughout the day. On a Friday night in the city you can find Uber prices surpassing taxis'. The future of Uber has too many variables to take a strong stance on its "success".

21
jaypaulynice 5 hours ago 0 replies      
Uber will be successful, but the problem is that right now there is no barrier to entry in this market. They're pushing Lyft to spend more on marketing in order to gain share.

Uber has a huge chance of taking over public transportation and making it more effective. That's what I think they're gunning for...to privatize public transportation. How else will you get around town?

22
midnitewarrior 3 hours ago 0 replies      
The author forgot another significant problem Uber faces.

When self-driving fleets can be deployed for ride sharing, a new startup -- one without the hundreds of millions or billions in losses that Uber will have accumulated -- will come on the scene.

How can Uber, a company with massive losses to recover from, compete with a new, nimble and well-funded startup that doesn't have those legacy losses weighing down its ability to raise capital and pay back investors?

23
mcguire 4 hours ago 0 replies      
Tl;dr: Uber is using rule 1 of Monopoly 101: Subsidize your customers until you drive your competitors out of business. However, it does not appear they have a competitive advantage to prevent new competitors.

Edit: damn you, android keyboard!

24
jc_811 4 hours ago 0 replies      
I agree with this, however I don't agree with the author's take on the self driving car part. He says:

> Lets assume that we will see fully autonomous vehicles that can navigate city traffic in the near future [...] If this technology becomes available, I doubt that Uber will have a monopoly on self-driving car technology.[...]I think its safe to say that many companies will have access to self-driving car technology.[...]In this scenario I dont see how Uber can generate reasonable profits

In the ideal 'future' society, everyone will have a self-driving car they can order to pick them up wherever they are. This would drive Uber out of business. However, there will still be a whole taxi industry performing this service when someone is outside of their own city.

I believe the taxis of the future will be there to assist someone in one of two scenarios:

1.) Someone in their own city who doesn't own a self driving car

2.) Someone who is in a city different than their own

I still think there will be plenty of business in the above scenarios - and Uber is positioning itself to be the industry leader/titan. It's definitely a huge gamble since predicting the future is impossible at worst, and extremely hard at best; but we'll see if Uber can stay afloat long enough to reach it.

25
jupp0r 2 hours ago 0 replies      
The author makes good points, but his understanding of Uber's vision is very limited. It's not about drivers or passengers; it's an API to get stuff from A to B, be it people or goods.

Also, describing self driving cars in city traffic as "unlikely" really misses the point that self driving cars are driving in city traffic right now and have been doing that for years. They are too expensive to be commercially viable, but that's definitely going to change in the near future.

26
dmitrygr 5 hours ago 2 replies      
Possibility: seeing all this, Uber looks hard at what they do have and simply becomes a contract cab dispatcher for all cabs everywhere, smartly choosing which cab to send to the caller using their algorithms, and lets existing cab companies do the actual driving.

Why? Currently some cab companies have shitty apps, others have none. Your chances of getting a cab in a suburb of an unfamiliar city are zero. Uber can fix that.

27
mayerzahid 3 hours ago 0 replies      
A large assumption I keep seeing is that because Uber will lead with self-driving cars, people will just stop purchasing cars and use only self-driving ride sharing. I think a portion of people will, but many will still buy their own self-driving cars for multiple reasons: time saved, comfort and control of the passenger experience, and a cash-generating vehicle that can be plugged into the self-driving network while it is not being used.
28
maverick_iceman 5 hours ago 2 replies      
People said similar things about Facebook/Instagram. Or even the internet as a whole (Paul Krugman).
29
zby 4 hours ago 0 replies      
Why does everyone assume that Uber would be allowed to become a monopoly? There are anti-monopoly laws. I understand that so far Uber has been allowed to break lots of laws, but this cannot go on indefinitely.
30
return0 1 hour ago 0 replies      
For some reason Uber reminds me of the rental car business. Yeah it appears to be hard to build a monopoly there.
31
sakoht 5 hours ago 0 replies      
To have a moat, they just need enough software, human-network, and legal infrastructure that competing requires too much investment. All of these things are a PITA, and not immediately replicable by fresh local competitors, even without the self-driving cars.

The self-driving cars and consistent platform could seal the deal against the small players entering.

That's not to say their valuation is correct, or anything. :)

32
konschubert 5 hours ago 3 replies      
If I could bet against Uber, I would do it
33
abalone 3 hours ago 1 reply      
tl;dr He thinks UberPool is a viable business with network effects, but Uber's valuation is too high.

He didn't really support that well, though. If everybody uses UberPool because the economy of scale tied to the network size is unbeatable, that's a lot of riders. It doesn't seem impossible on its face that owning the future replacement for public transport for half the world would justify a high valuation.

34
leecarraher 3 hours ago 0 replies      
Uber offers a better product than taxi services do. So even if Uber raised their rates to be comparable to taxi services, I would still use Uber because it's a better experience -- and I only learned that because they incentivized trying it with overall lower rates.
35
aashaykumar92 5 hours ago 1 reply      
Depends on how you define success. Imo, Uber will become financially successful when autonomous cars and trucks work well and are regulated.
36
cryptozeus 5 hours ago 5 replies      
AMZN and TSLA started reporting profits only recently. Would you call them failures?
37
ry4n413 5 hours ago 1 reply      
Any word on IPO timeline?
22
Most Winning A/B Test Results Are Illusory [pdf] qubit.com
61 points by maverick_iceman  5 hours ago   27 comments top 5
1
ted_dunning 16 minutes ago 0 replies      
This is yet another article that ignores the fact that there is a MUCH better approach to this problem.

Thompson sampling avoids the problems of multiple testing, power, early stopping and so on by starting with a proper Bayesian approach. The idea is that the question we want to answer is more "Which alternative is nearly as good as the best with pretty high probability?". This is very different from the question being answered by a classical test of significance. Moreover, it would be good if we could answer the question partially by decreasing the number of times we sample options that are clearly worse than the best. What we want to solve is the multi-armed bandit problem, not the retrospective analysis of experimental results problem.

The really good news is that Thompson sampling is both much simpler than hypothesis testing and can be applied in far more complex situations. It is known to be an asymptotically optimal solution to the multi-armed bandit and often takes only a few lines of very simple code to implement.

See http://tdunning.blogspot.com/2012/02/bayesian-bandits.html for an essay and see https://github.com/tdunning/bandit-ranking for an example applied to ranking.

2
godDLL 2 hours ago 1 reply      
This is very well explained, even if you don't understand statistics. Apparently not many vendors of A/B testing software do.
3
jkuria 1 hour ago 4 replies      
Hmmh, this is interesting. Most A/B software will let you set a level of statistical confidence that needs to be attained before a winner can be declared. For example in Google Analytics two common ones are 95% and 99%. We stop our tests when they reach at least 95% confidence. Is the author saying one must wait for 6000 events even if the difference between A/B is large? The larger the relative difference, the fewer events needed.
4
Vinnl 1 hour ago 1 reply      
Nice article. One question though:

> Perform a second validation test repeating your original test to check that the effect is real

Isn't this just the same as taking a larger sample size?

5
tedsanders 2 hours ago 5 replies      
I think the entire approach discussed in this pdf is flawed. (Edit: not saying PDF itself is flawed or wrong, just the hypothesis testing approach to A/B testing.)

The right question to ask is: What is the difference between A and B, and what is our uncertainty on that estimate?

The wrong question to ask is: Is A different/better than B, given some confidence threshold?

The reason this is the wrong question is that it's unnecessarily binary. It is a non-linear transformation of information that undervalues confidence away from the arbitrary threshold and overvalues confidence right at the arbitrary threshold.

A test with only 10 or 100 samples still gives you information. It gives you weak information, sure, but information nonetheless. If you approach the problem from a continuous perspective (asking how big the difference is), you can straightforwardly use the information. But if you approach the problem from a binary hypothesis-testing perspective (asking whether there is a difference), you'll be throwing away lots of weak information when it could be providing real (yet uncertain) value.

Once you switch away from the binary hypothesis-testing framework, you no longer have to worry about silly issues like stopping too early or false positives or false negatives. You simply have a distribution of probabilities over possible effect sizes.

23
Sidehelm: a pipeline to validate, test, and pull CSV data sidehelm.com
34 points by iamwil  4 hours ago   14 comments top 7
1
grenoire 3 hours ago 1 reply      
How trustworthy is Sidehelm in handling the data? Do they ensure that the pipelined data does not come into contact with human eyes and is not copied?

I cannot really think of many services off the top of my head that would be so willing to give significant chunks of data to another pipelining service.

2
michalskop 2 hours ago 0 replies      
Related: for working with CSV data, there is an interesting standardization project by Open Knowledge: http://frictionlessdata.io/guides/tabular-data-package/

It already includes some tools for working with and displaying CSV data.

3
iforgotmypass 3 hours ago 2 replies      
Dear HN users,

Could you please link to alternative solutions?

Does AWS Glue solve the same problems? https://aws.amazon.com/glue/

4
triplenineteen 2 hours ago 1 reply      
Does this support Unicode characters? Other encodings?
5
jimktrains2 1 hour ago 0 replies      
non sequitur: I find it interesting that they're using the Google cloud blue hex style in their icons, but don't seem to support BigQuery or BigTable/HBase.
6
automatwon 2 hours ago 1 reply      
The 'Parse', 'Transform', and 'Validate' icons look a lot like Google Cloud Platform icons.

https://docs.google.com/presentation/d/1vjm5YdmOH5LrubFhHf1v...

7
TAForObvReasons 2 hours ago 0 replies      
Is there a functional demo or video demonstration? All I see is a marketing page ...
24
Turbo Pascal Compiler (2013) teamten.com
216 points by bootload  13 hours ago   77 comments top 25
1
oso2k 5 hours ago 1 reply      
I'm surprised that no one here seems to have noticed that this is a Turbo Pascal-compatible compiler written in JS that writes binaries compatible with 1978 UCSD p-System p-code, and a p-code VM, also written in JS, that will run in a web page displaying x86 PC/DOS graphics. No fewer than 4 architectures to juggle (PC/CGA/EGA/VGA, JS, Web/DOM, p-code).

Early 8- and 16-bit BASICs and Pascals, worked on by several people, all strived for p-code compatibility [0][1] -- and this guy wrote this by himself. I thought that was most remarkable when I found it a couple of years ago.

[0] https://en.wikipedia.org/wiki/P-code_machine

[1] https://en.wikipedia.org/wiki/UCSD_Pascal

2
cyberferret 11 hours ago 7 replies      
The first ever computer program I wrote for $$$ was a retail point of sale system for a local pharmacy. I used Turbo Pascal for it, complete with my own proprietary database files etc.!

Later on, I wrote a front end menu system (which simply gave a customisable easy to select option of applications to run in DOS) that ended up being used on thousands of PC's throughout businesses and almost every government department in my town.

Good Days. I downloaded the original Turbo Pascal v 1.x a couple of years back, and intended to try re-writing that point of sale system again to see how much I still remembered > 30 years later. Never got around to it, but I will make a concerted effort this year to try it out. I may just try it on this web emulator, seeing as I am now on a Mac!

Thank you, Anders Hejlsberg! And to these guys for building a web emulator of the original compiler.

3
linker3000 11 hours ago 2 replies      
I too found some old floppies a few months back and one contained the Turbo Pascal source code for a map editor I wrote for the original Rockford game on the PC.

I fired up Turbo Pascal in a DOS box on Windows 10 and got the code to compile. Sadly, I don't have the game any more and the 'Rockford' available online is actually a remake / emulation using a different game engine and the maps are in a different format.

Code here for anyone who wants to laugh at my skills as a non-programmer:

https://github.com/linker3000/Historic-code-PC-Pascal-and-AS...

PS: Does anyone here have a copy of the original Rockford game!!??

4
equalarrow 6 hours ago 0 replies      
One day, when I was still in high school, my dad brought home a DEC mini (dunno which one). It had 2 8" floppy drives and an actual terminal that would sit on top. When you turned it on, it had these 2 huge fans in the back that sounded like an airliner taking off. It was still in its 8U rack chassis.

When I turned it on for the first time, it booted off what must have been some kind of ROM and dropped straight into UCSD Pascal. I was programming at home at the time in BASIC & 6502 assembly, and when I saw the Pascal prompt, I was like, awesome! (I had a class at school using Turbo Pascal.)

I spent a lot of weekends in our cold garage (over the winter) hacking on that machine. I wish I had saved some pictures. I made a Star Trek-type game and (my fav at the time) a database à la dBase.

Good times..

(Mobile typo edits)

5
lumberjack 10 hours ago 2 replies      
Turbo Pascal was my introduction to programming. Using just the CRT library (Graph, if you wanted to be fancy, was also easy to use) and learning about loops and procedures was enough to be able to write simple Nintendo 64-style games. I still don't think any programming languages today match that kind of mild learning curve.
6
lobster_johnson 6 hours ago 2 replies      
Back in 1997 I was a developer on a commercial Windows game, written in Turbo Pascal (or technically, Borland Pascal). Not sure how many games have been developed with Turbo; can't be that many.

We had to write our own bindings for DirectDraw and DirectSound, since there were no C header files we could use directly. (This was before 3D acceleration, so no Direct3D, which wouldn't really have been feasible.) We were all Pascal programmers at the time, and didn't even consider using C or C++.

Turbo Pascal was really ideal for writing games in. Short develop-compile-debug cycle, great native performance, support for inline assembly (our image code had lots of this), and easy calling into C libs.

I was using Borland's Pascal tools as late as 1999-2000, the last iteration being Delphi 4.0. While I did plenty of GUI stuff, my biggest project was a non-GUI teleconferencing solution that consisted of a web application that orchestrated calls using several distributed backends (or microservices as we would call them today), with RPC using Microsoft DCOM; Delphi had very good COM support. As part of this app, I had to talk to several low-level telecomms boards by Dialogic, which of course only had C headers. I wrote an AST-based C-to-Pascal translator so I didn't have to do all the headers manually, and I was able to use it to translate things like Microsoft's MAPI headers, which were COM.

Still... It's amazing to think today that I was so fond of/productive in Pascal that I shoehorned everything into it, even those headless server apps, when obviously C or C++ would have provided much less friction.

The answer is of course that TP/BP/Delphi all provided an amazingly productive experience. These days I use Go a lot, which of course is heavily influenced by the Wirth family of languages. Go today feels a lot like Borland Pascal with garbage collection.

</nostalgia>

7
pawadu 12 hours ago 3 replies      
The original compiler can be downloaded from Borland:

http://edn.embarcadero.com/museum/

IIRC someone has posted the complete source code online but I can't find it right now...

edit: http://turbopascal.org/turbo-pascal-download

8
phkahler 8 hours ago 0 replies      
This made me smile. Not because I care at all about TP3, or even TP6, which I did a lot with. No, it's because of the effect it had on the author. For him it was a project that made programming fun again, for personal reasons. In practical terms it's worthless; in personal terms it was priceless ;-)

Love that.

9
johnhattan 8 hours ago 0 replies      
Having done TRS-80 BASIC for a couple of years in high school, Turbo Pascal was a game-changer. I had a TinyPascal compiler for TRS-80, but Turbo Pascal converted me the moment I saw someone switch from editing to compiling without actually leaving the program.

I spent the summer saving up for my own 286 machine just so I could use it.

10
alyandon 2 hours ago 0 replies      
Turbo Pascal was my first experience in structured programming languages (and a slick IDE to boot!).

The affordability of all the Turbo-branded language products (and the excellent printed documentation that came with them) that my parents made available to me as a child is one of the primary reasons I have such a love for technology and why I am in software development today.

It's really a shame that modern programming platforms don't capture some of that same ease of use for experimentation. :(

11
clouddrover 11 hours ago 1 reply      
I think the nicest way to achieve "Pascal on the web" will be for Free Pascal to implement WebAssembly support. Free Pascal can compile to the JVM (http://wiki.freepascal.org/FPC_JVM); I'm not sure what their WebAssembly plans are.
12
pilif 7 hours ago 0 replies      
I didn't start with Turbo Pascal, as when it was in wide use my parents still didn't allow me to go near a computer.

That didn't stop Pascal from starting my career, though, as I began programming for real a few years later with Delphi 2, which ran a slightly improved version of Pascal with object-oriented additions.

How I loved that programming environment: as quick and easy to use as VB, but able to produce real native binaries that run without any (external) runtime environment.

Plus you got all the Windows SDK C headers pre-translated to Pascal, so the whole Windows API was ready at your fingertips (what could possibly go wrong when a self-taught teenager gets to write native code with completely unprotected access to memory and threading?).

Delphi is what I used for my first commercial project too, and Delphi is what I still use these days when I have to do some very, very rare Windows work.

The language is fantastic. Even after years of not looking at my code, it is very readable to me and I get back into productive mode very quickly.

Of course this might all just be nostalgia talking.

13
Philipp__ 8 hours ago 1 reply      
What strikes me about (Turbo) Pascal is how good it is for introductory programming! Just think about it: I find it to be like a mixture of BASIC-like structure and Python-esque syntax.

The language in itself is very clean, and its static nature and the precision you need when declaring variables are great preparation for C down the road. Then again, you can do many things with it.

I remember writing my first serious app in Pascal (a few hundred LOC); it was basically a CRUD app, but it talked to .txt and .bin files. I learned a lot about memory management, and it made me implement and really deeply understand many concepts: linked lists, sorting algorithms, and working with strings (writing small parsers). I am very grateful for Pascal, and I think it changed me forever, in that I fell in love with it and got the hang of lower-level programming pretty early. After that, when I went to C, I had to learn a lot, but it felt natural. It's funny that I find myself struggling with Python and JS (with JS I got better; Lisp and functional programming came in handy there), because I just can't get used to a language giving you pretty much everything through one simple method/function call. Any number of times I found myself writing a function for something trivial that already exists in the language itself. Anyway, it felt great to think about old times and Pascal. I think it is very underrated as a learning language.

15
rbanffy 10 hours ago 2 replies      
How is it possible to not find multiple free (beer or speech) Pascal compilers that can deal with Turbo Pascal code after 30 seconds of googling?
16
BugsJustFindMe 10 hours ago 0 replies      
My favorite Pascal compiler story is about the development of G-Pascal for the Apple II and then C64 at http://www.supercoders.com.au/blog/nickgammongpascal.shtml
17
antirez 1 hour ago 0 replies      
Make sure to check the other projects from the same guy. Lovely things.
18
ilaksh 6 hours ago 0 replies      
If you like Pascal check out http://nim-lang.org
19
davb 9 hours ago 0 replies      
> This compiler is the only project I've ever worked on, in my life, which I enjoyed every bit of.

I wish I could find a project that made me feel like that.

20
k-mcgrady 11 hours ago 0 replies      
Turbo Pascal was the first language I properly learnt after dabbling in VB a little. This was about 12 years ago. I was in school at the time and did a short work placement at Borland (developers of TP) where people found it hilarious that I was learning TP. It's still one of the most enjoyable languages I've written in but maybe that was just because of the challenge of everything being new to me.
21
open-source-ux 10 hours ago 3 replies      
I really miss fast, compiled programming languages like Turbo Pascal in the web space.

How many languages can you identify for web development that match the following features:

- Fast

- Compiled

- Small, single file executables

- Low-memory consumption

- Readable syntax that isn't afraid of being a bit verbose

- A small language vocabulary you can actually learn rather than the labyrinthine language definitions of today

I always liked Niklaus Wirth's philosophy on programming language design. I wish more programming language designers would follow it.

22
mzs 6 hours ago 0 replies      
D to list files

W to load file

R to run

23
Shivetya 10 hours ago 1 reply      
Turbo Pascal sparked my true interest in programming. Prior to that I experimented with BASIC but never found it appealing, though I did get to play around with it with some people who could truly work magic with the language.

I dabbled in Modula-2 (Stony Brook) and Turbo C. However, I never really got into Delphi, as I wasn't that interested in OOP or Windows. Worse, Borland did their best to break my interest with the near-constant upgrades that required buying the product all over again, and in bundles that inflated the price.

Turbo Pascal or something similar would be a cool way to write web pages, if it could be extended that way without getting silly complex.

24
32wattle_park 11 hours ago 0 replies      
omg, reminds me of uni days... I liked Turbo Pascal.
25
JackFr 8 hours ago 0 replies      
That looks nothing like a rose.
25
Silicon Valleys Ultimate Exit (2013) genius.com
45 points by nkurz  4 hours ago   20 comments top 7
1
etjossem 1 hour ago 1 reply      
> Build an opt-in society, ultimately outside the US, run by technology.

Balaji Srinivasan is being considered to run the FDA. If chosen, he is expected to radically weaken its regulatory power, purportedly in the interests of promoting innovation in the pharma space. He's described it as the big enemy for any biotech innovator to overcome.

When we have an opt-in healthcare system, people who are in perfectly good health will opt into favorable plans only they qualify for. Those born with expensive conditions or who develop a chronic illness won't be able to opt in, and will find themselves in a high risk pool paying hardship-tier premiums.

When we have pharmaceutical companies opting into less stringent regulatory structures, even the most dangerous drugs will make it to clinical trials - to be tested by people who lack the ability to choose any other way of making a living. Pharma manufacturers will then base their prices on what they believe the highest tier of wealth will bear, ignoring the long tail of underinsured Americans.

The problem with an opt-in society is that only some people get the privilege of opting in. It's an almost self-contradicting concept. Societies only hold together if their members don't favor exit. Members collectively recognize they're all on the same side, by way of the social contract they've entered into, and they know the rules need to be the same for everyone. A modicum of loyalty - in Hirschman's sense - is an absolute prerequisite for a group to be called a society.

When it comes to our health, let's show some loyalty to our fellow Americans and use our voices, instead of fleeing the inconveniently poor and sick. The veil of ignorance hasn't come off yet, after all: if you get cancer tomorrow, you might well become one of them.

2
stephancoral 1 hour ago 1 reply      
Why are SV "big shots" so obsessed with making their own little la-la land fiefdoms (Who actually wants AI/tech to run their country? Like, that is literally an apocalyptic nightmare of mine) instead of contributing positively to the country that helped foster their fortunes?

This is some of the most myopic, ego-stroking bullshit I think I've ever read on this site.

Also, the sooner the ridiculous "code=law" analogy goes away, the better. It's clear this person has very little understanding of how affairs of state actually work.

3
nkurz 3 hours ago 1 reply      
I post this now because the author (Balaji Srinivasan) has recently been named as a contender for heading the FDA: https://www.statnews.com/2017/01/12/fda-trump-oneil-srinivas...

Note that "exit" here is not synonymous with "secession", but instead refers to Albert Hirschman's contrast of "exit" and "voice" as two complementary approaches that individuals can use to influence the society around them: http://peterlevine.ws/?p=11887.

In this context, the option of "voice" includes the approach of reforming institutions from within, while "exit" leaves them broken and bypasses them with new alternatives. Given Srinivasan's embrace of "exit", this makes him an intriguing candidate for overhauling a massive federal bureaucracy.

4
bbctol 1 hour ago 2 replies      
One thing a lot of people don't realize about the modern concept of the "social contract" is that it depends on the capacity to exit, which requires the existence of the frontier. John Locke's idea of government wasn't just influential to the development of the United States, it was openly influenced by the existence of America; he wrote a lot about how the "state of nature" that exists independent of government was the American frontier (he didn't care much for/about Native Americans.) Now, the idea that a government is a social contract between citizens and rulers is universal, but we've eliminated its original foundation: an empty space citizens can settle when they decide their government is worse than nothing.

There's no easy solution to this problem. We're out of unclaimed land (seasteading and Mars colonization wouldn't really resolve it, even if they were more plausible.) I don't think Srinivasan has a great concept of why nations work; the problem with a Silicon Valley exit is less that Silicon Valley doesn't have aircraft carriers, and more that it doesn't have, you know, farmland, and I don't think the opt-in society would be allowed: Silicon Valley is rich and prosperous because of the large pool of mainly American consumers it sells to, and it is protected by the auspices of the US government. If they really wanted to leave, they wouldn't be able to take the money-making with them, as the US has no incentive to protect the businesses of a foreign power. I actually think that instead of getting more and more small countries as time goes on, we'll get increasing large blocs caring for things like defense and very general welfare, and increasingly small, atomized communities, either geographically or online, caring for other needs. If we have a large United States government that takes care of the nuclear weapons stockpile, coordinates response to natural disasters, and makes sure people don't die for arbitrary reasons, smaller communities can tax, spend, negotiate as they will. And I expect these communities will increasingly go online; online, Locke's dream can be realized, as there really is infinite space to colonize (if you don't like your current forum, go found a new one.) As long as people have a basic level of security, they'll be free to self-organize and explore new models of society in a virtual world. I can't tell if this vision of the future is utopian or dystopian.

5
austenallred 1 hour ago 2 replies      
Does anybody have a link to Balaji's MOOC? I've heard great reviews about it, and I have an enormous amount of respect for Balaji, but I haven't been able to find it - I can only find his MOOC about Bitcoin. Or is that the MOOC he was referring to?
6
DannyB2 2 hours ago 1 reply      
If the USA is the Microsoft of nations, as the article says.

Then would some current or future president be the Ballmer of the US?

7
tzakrajs 57 minutes ago 0 replies      
The governments will sooner outlaw job automation or implement basic income than allow chaos to take hold.
26
Dont Tell Your Friends Theyre Lucky nautil.us
180 points by dnetesn  5 hours ago   179 comments top 30
1
colanderman 4 hours ago 4 replies      
The article (and some comments here) seem to conflate luck and what I will call lot. "Luck" I define as random happenstance during one's life. You can manage luck. Doing so is the central theme of many board games. You can increase your luck "surface area" by taking more chances. Entire industries (e.g. insurance) exist to manage luck.

Your "lot", on the other hand, I define as what you were born with. How you were raised, where you grew up, what kind of education you got -- everything you can't control that does have a significant impact on your life's outcomes. You can work to improve your lot, or minimize its impact on your life, but it's very difficult.

Of course there's some correlation: those with a good lot often learn early how to manage luck, and those who manage luck well can negate a poor lot.

Hence I begrudge no-one with seemingly good "luck": often (possibly more than not), their fortune is simply a byproduct of how they managed their luck. Good for them!

But those born into a good lot? They're the true "lucky" ones.

2
ergothus 4 hours ago 11 replies      
My father and I have somewhat productive political conversations: he's fiscally conservative; I tend towards the liberal side of the scale.

Drilling in to find what we really disagree about, it seems to boil down to two concepts: (1) I view success as a matter of luck that your effort can make better or worse, while he views effort as the single most important deciding factor in success in life. (2) I'm willing to tolerate an amount of "unfairness" in people getting help they "don't deserve", while he finds this very offensive.

I honestly feel that if he considered luck to be a larger factor and effort a lesser one, his political stances would change pretty dramatically (the same applies to me in reverse). I wonder how much the social willingness to accept luck as a factor impacts popular political positions. (Perhaps not much, as the author in the article promotes a consumption tax, which is generally seen as more regressive.)

3
dv_dt 5 hours ago 3 replies      
I think this touches upon one of the biggest weaknesses of the current economic system. We systematically waste the human capability of millions of people because the system essentially randomly gives much better opportunity to some over others. Meritocracy somewhat exists but mostly to the extent that people can maximize the opportunity they've drawn as their lot in life.

I like the idea of Basic Income, but it's a somewhat limited solution, capping how far down someone can fall in society. What would really supercharge a future economy is opening up avenues to truly distributed equal opportunity. Wealth inequality suppresses this strongly: when people receive a better margin of income over the absolute minimum allocation of their wages, they can allocate their own wealth according to their personal outlook in multiple ways, including starting businesses which may change the world.

4
kyleschiller 5 hours ago 3 replies      
Debating the actual importance of luck seems a lot less important than developing the proper attitude towards luck.

Pretending luck doesn't exist can lead to arrogance and a lack of empathy for people who haven't succeeded. On the other hand, believing that luck controls everything can lead to fatalism.

It might seem best to find a happy medium, but being wishy-washy about the whole thing just gives you opportunities to blame your own failures on circumstances outside your control while continuing to take credit for successes. In the general case, looking for balance between opposing ideologies makes no guarantee that you'll walk away with the best parts of both instead of the worst.

In practice, it's probably best to drop the determinism/indeterminism dichotomy completely and just focus directly on the desired end attitudes.

On a side note, the reason American society is obsessed with meritocracy has nothing to do with a belief about the nature of luck. Denying luck as the path to success is just a way to make people work harder.

5
phkahler 5 hours ago 4 replies      
A progressive consumption tax is ridiculous. It requires your tax rate at the point of sale to depend on all your purchases up to that point in time. That's just not practical. Alternatively, it requires every purchase you make to be recorded so that you pay the taxes at tax time. Either way, it requires the government to know every purchase you make, or at least the price. This is not something anyone should want.
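For what it's worth, the variant usually proposed (including by Frank) is filed annually like an income tax, taxing income minus savings, rather than being computed at the point of sale. A toy sketch of that mechanism in Python; the bracket numbers are invented for illustration:

    # Toy annual progressive consumption tax: consumption = income - savings,
    # reported once a year on a tax return (brackets are invented).
    BRACKETS = [(25000, 0.0), (50000, 0.2), (100000, 0.4), (float("inf"), 0.6)]

    def consumption_tax(income, savings):
        consumption = income - savings
        tax, lower = 0.0, 0
        for upper, rate in BRACKETS:
            if consumption > lower:
                # Tax only the slice of consumption inside this bracket.
                tax += (min(consumption, upper) - lower) * rate
            lower = upper
        return tax

    print(consumption_tax(120000, 20000))  # taxes the 100k actually consumed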
6
downandout 4 hours ago 2 replies      
It's certainly true that you need to be very lucky to become a billionaire - generating wealth at that level usually involves tremendous numbers of other people loving whatever business you have decided to create. But if you're reasonably intelligent, at least in the US, it's quite possible to become a millionaire without much luck, through decades of hard work and discipline.

Examples: software engineers at large companies that stick around for decades (usually through options), doctors (at least specialists, such as cardiologists and anesthesiologists), and lawyers that go to the best schools and are able to land jobs at top-flight firms. Even tradesmen that stick to their craft, such as master electricians or plumbers, can quite reasonably expect to achieve millionaire status over the course of their lifetime assuming that they manage their money well.

So yes, luck plays a huge role in the creation of enormous sums of wealth. But if you live in a country with abundant economic opportunity such as the US, there's no reason to be poor unless you have been extremely unlucky (health problems, accidents, etc have befallen you), you are unwilling to work, or you've made extremely poor life/financial decisions.

7
chrishacken 4 hours ago 1 reply      
Maybe I'm naive, but I don't think anyone denies the role luck plays in one's success. However, to completely discard effort and determination is selling everyone short. I'm running a successful company partially because of "luck": I happened to start it at the perfect time. But also because I pour every ounce of money and time I have into it. My nights and weekends don't exist. Some people aren't willing to put in the time to turn luck into success.

Telling people that success is just a matter of luck will only reinforce the thoughts of unsuccessful people to believe they're "unlucky". You are able to make your own luck to an extent.

8
jartelt 4 hours ago 0 replies      
I think a lot of people do not realize that you are lucky if you are born into a middle class or upper class family. Having parents with some savings allows you to take extra career risks because you know that you can likely get help from your parents if none of the risks pay off. It is more difficult to make the decision to work at a startup or buy a house if you are totally on your own when things go south.
9
cmurf 4 hours ago 0 replies      
Veil of ignorance. There's a significant part of the upper (wealth-wise) end of the population that likes our classist society just the way it is, or thinks it should be more classist. Everything should be a rent, there should be no public lands, everything is to be exploited, and if you're on the short end of the stick it's merely unfair, not a wrong or a failure of society. Or the more extreme versions of this: higher-class folk have better money, better ideas, better genes, make and sell better things. They are better than others. Democracy and socialism are threats to these notions.
10
jeffdavis 5 hours ago 3 replies      
Just like when people are trying to sell you something, they call it an "investment"; people trying to implement government spending programs call it "spreading opportunity".

Some government programs really do spread opportunity, but that requires close examination and criticism; I don't just buy into it because a politician calls it opportunity. Is college an opportunity? It can be a huge opportunity to get ahead in life; but it can also just subsidize a partying lifestyle and a phony major for four years. It depends on the college, the student, and the structure of the opportunity.

It's hard to tell the difference between spreading opportunity and spreading results. It often requires looking at the details, measuring along the way, and it is often different for different people.

11
ChuckMcM 2 hours ago 0 replies      
If you get a chance to experience an "exit", where a number of people suddenly have much more wealth than others around them who are essentially doing the same things but joined the company at a different time, you will get to see all the different ways that people internalize that event (both positively and negatively).

Luck is very much a part of success, and a big part of the way the Vikings talked of sailing with successful leaders ('they have a lot of luck'). And most importantly, luck has no bearing on character. But internalizing that can be hard when someone you despise gets rich, or someone you really care about fails to get the rewards that others in the same place have.

12
jrs235 42 minutes ago 0 replies      
I believe https://news.ycombinator.com/item?id=13437977 ties in with this, in that many "lucky" people prepared so that when luck struck, things were aligned to take off.
13
emodendroket 1 hour ago 0 replies      
This seems to take a sudden leap from a relatively uncontroversial (I'd think) proposition into a political program. I wonder about this bit:

> The price of the average American wedding in 1980 was $10,000. In 2014, the most recent figure I had, was $31,000.

According to a random inflation calculator I checked online $10k in 1980 would be worth almost $30k today. https://data.bls.gov/cgi-bin/cpicalc.pl?cost1=10000&year1=19...

14
charles-salvia 4 hours ago 0 replies      
In the United States, at least, poverty tends to be concentrated geographically in inner-cities and rural areas instead of being evenly spread out. This would seem to indicate fairly conclusively that location and environment affect opportunity and wealth more so than an individual willingness to work hard. In fact, being born into an environment of concentrated poverty like this molds your mental state and perception of the world, to the extent that the idea of breaking out of poverty may not always even appear as a possibility, thus discouraging you from even believing that hard work might pay off.
15
baldfat 5 hours ago 0 replies      
I am an anti-determinist, and Soren Kierkegaard (founder of existentialist thought) so inspired me that I named my son Soren. The fight between the two parties of thought is huge, bigger than Windows vs. OS X.

> Jean-Paul Sartre:

"What is meant here by saying that existence precedes essence? It means that, first of all, man turns up, appears on the scene, and, only afterwards, defines himself. If man, as the existentialist conceives him, is indefinable, it is because at first he is nothing. Only afterward will he be something, and he himself will have made what he will be."

Society sees luck in terms of fairness. This article used the word fair or fairness zero times. Fairness is a HUGE issue in deterministic thought, especially in how we perceive others around us.

16
tabeth 5 hours ago 4 replies      
I'm a strong determinist. Effort, hard work and skill are irrelevant (any relevance comes from the fact that you're already in your statistical band for expected success and are trying to maximize within that). I believe most of your success is determined before you even take one step on this planet. Step one is acknowledging the truth: your initial circumstances dictate your future. Once this is acknowledged, we as a species can begin focusing on making the initial conditions ideal for everyone.

Note: I am not saying you shouldn't work hard. I am just saying that it's not doing as much as you think. Individual examples of success (I've done decently despite two parents who didn't finish elementary school, live in inner city, etc) are not of relevance for planning the future of the human race. The world is chaotic, so there will be outliers in spite of the "determinist property" of the world.

Parents' own desperation to "set their children up" for success is anecdotal confirmation of this fact.

---

Some examples:

Socioeconomic status v. Education: http://www.apa.org/pi/ses/resources/publications/education.a...

Health v. Education: http://www.nber.org/digest/mar07/w12352.html

Health v. Socioeconomic Status: http://www.apa.org/pi/ses/resources/publications/work-stress...

Parent education v. child long-term success: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2853053/

Skin color v. attractiveness: http://journals.sagepub.com/doi/abs/10.1177/0095798405278341

Height v. success: http://www.independent.co.uk/life-style/health-and-families/...

Weight (at birth) v. success: http://ns.umich.edu/new/releases/5882

Attractiveness v. success: https://www.psychologytoday.com/blog/games-primates-play/201...

Gender v. success: https://www.historians.org/publications-and-directories/pers...

Eye color v. alcoholism: http://www.sciencedirect.com/science/article/pii/S0191886900...

Geography v. socioeconomic success: http://www.cid.harvard.edu/archive/andes/documents/bgpapers/...

17
nisse72 2 hours ago 1 reply      
Tangentially related: I find it interesting that we often call people lucky when something very bad happened to them but they somehow managed to survive the situation or land on their feet. We aren't as keen to describe as lucky the people who avoided danger entirely.

Lone survivor in a plane crash? Lucky. Took a cruise instead? Meh.

Personally I think it's preferable to not be in the crash, than to have survived it.

18
rbanffy 28 minutes ago 0 replies      
Why would I? They live in this timeline.
19
slitaz 5 hours ago 1 reply      
"luck" is not a good choice as a word here.They mean something like a chaotic event that ended up being positive to them.

Also, just waiting for such a positive chaotic event to happen to you is probably not the best strategy.

If you make good social connections and maintain them, then those positive chaotic events are more likely to come your way.

20
jrs235 4 hours ago 0 replies      
"It takes 10 years to achieve overnight success."

http://www.inc.com/empact/why-successful-people-take-10-year...

21
aresant 5 hours ago 1 reply      
Ben Franklin has a great line on this topic - "Diligence is the mother of good luck."

The author illustrates this major point with an example of the "TOP" cellist in the world:

"One [cellist] earns eight or nine figures a year while the cellist who is almost as good is teaching music lessons to third graders in New Jersey somewhere. . . The person who is eventually successful got there by defeating thousands, maybe tens of thousands, of rivals in competitions that started at an early age. . . [but] the luckiest one . . [is] that person who is going to win the contest most of the time."

E.g., you need to put in the hours of preparation and subject yourself to competition of the highest order to even have a chance at being the "luckiest" in your field.

22
dang 4 hours ago 0 replies      
This topic always reminds me of a line of pg's from years ago: https://news.ycombinator.com/item?id=1621768.
23
djyaz1200 5 hours ago 1 reply      
There is a pretty good book that addresses some of the business aspects of this: "Competing Against Luck" by David Duncan and Clayton Christensen (the same guy who wrote "The Innovator's Dilemma"). I'm not done with it yet, but so far it goes into some interesting detail about how to reframe everything people pay for as jobs, and how building a successful business is about understanding the job to be done and mastering it.
24
MikeTLive 3 hours ago 0 replies      
The first thing that impacts your future success is the luck of the conditions of your birth; you have no control over this. Hard work MAY make up for it; however, having a "better" birth condition plus the same hard work does not negate the value of that first starting position.

This is lost on many successful people, who wrongly attribute the entirety of their success to their own efforts and presume that anyone who is not successful has simply not worked hard.

25
jagtodeath 5 hours ago 3 replies      
Not super related, but I can't resist: the guy in the article looks almost EXACTLY like Steve Jobs.
26
pier25 3 hours ago 1 reply      
This reminded me of the film "Match Point" directed by Woody Allen. IMO his best film.
27
minikites 5 hours ago 3 replies      
I think a lot of people are emotionally unable to deal with a world that is as dramatically unfair as ours is, so they fall back to the childish notion that people who have fallen on hard times deserve it and successful people controlled their own destiny to get there, because the alternative is too uncomfortable to think about.
28
swolchok 3 hours ago 0 replies      
Related reading: Fooled By Randomness, by Nassim Nicholas Taleb.
29
chrismealy 5 hours ago 1 reply      
Frank is a terrific writer and his books are excellent (rare for an economist).
30
andrewclunn 4 hours ago 2 replies      
A lot of this "luck" can be traced back very easily to causes like "had two parents who gave a damn" or "had enough to eat growing up." The people pushing this narrative that you're not really responsible for your failure/success want it both ways. They want to make you admit that you benefit from living in a peaceful, stable society with infrastructure, while also not wanting to hold parents accountable for having too many kids too early, or to admit the impact that divorce has on young children. It always comes down to pushing some narrative that is meant to justify further state intrusion into our lives and the dismantling of the family unit, all with pseudo-scientific (see "the gray sciences") justifications and emotional appeals. Spare me the bullshit, I ain't buying it.

EDIT -

Looking for another example of this obvious propaganda? Try the latest episode of RadioLab:

http://www.radiolab.org/story/radiolab-presents-media-busted...

27
Things I learned creating my own Messenger chatbot kilianvalkhof.com
175 points by kilian  13 hours ago   14 comments top 7
1
donmatito 11 hours ago 1 reply      
In general, people interested in bots should not put too much emphasis on the chat part of chatbots. Most of the value will come from the frictionless, social experience. Nothing to download, no app switching, and your friends are already there.

Media is hyping the AI part because it is catchy, but using mostly/only quick replies and buttons gives a much better UX IMHO. So, fully agree with #4.

There are only a handful of cases where text actually reduces friction instead of increasing it. Entering a date, for example: "in 1h", "tomorrow morning" or "next friday" is really faster than other input types.
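That kind of free-text date entry is also cheap to support: libraries such as dateparser already understand those phrases. A minimal sketch, assuming a Python bot backend; the fallback prompt and exact phrases are illustrative:

    # Free-text date input for a bot, via the dateparser library.
    import dateparser

    def parse_when(text):
        # Handles phrases like "in 1 hour", "tomorrow morning", "next friday".
        return dateparser.parse(text, settings={"PREFER_DATES_FROM": "future"})

    when = parse_when("tomorrow morning")
    if when is None:
        # Parsing failed: fall back to buttons/quick replies instead.
        print("Sorry, when exactly? (e.g. 'tomorrow at 9')")
    else:
        print("Scheduling for", when.isoformat())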

For most other cases, Messenger bots just got access to webviews. Really under-utilized so far, I think.

2
TeMPOraL 12 hours ago 2 replies      
A hint for #1: don't assume a "like" is a like.

The default emoji in a conversation can be changed by either side, on demand. So don't be surprised if suddenly you start getting a scared cat, or a tomato, instead of a 369239263222822. Why would people change the default emoji when talking to a bot? Why not?

Hint #2: default emojis can be sent in several sizes, depending on how long the user holds the "send emoji" button. From what I can tell, 369239263222822 is a small like. There are at least two other sizes. This is not some obscure feature; people use it all the time, so be prepared.
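A webhook can treat all of those sizes as a plain like by matching on sticker_id. A rough Python sketch; the small-like ID is the one mentioned above, while the medium/large IDs are from memory and should be verified against the Messenger docs:

    # Normalize Messenger "like" stickers of any size in a webhook handler.
    SMALL_LIKE = 369239263222822  # the ID quoted in the article
    LIKE_STICKER_IDS = {SMALL_LIKE, 369239343222814, 369239383222810}  # larger sizes: assumption

    def classify_message(message):
        sticker_id = message.get("sticker_id")
        if sticker_id in LIKE_STICKER_IDS:
            return "like"      # any size of thumbs-up
        if sticker_id is not None:
            return "sticker"   # scared cat, tomato, ...
        return "text" if message.get("text") else "other"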

--

In general, it's worth paying attention to the details of the platform one's developing for ;).

3
edshiro 7 hours ago 1 reply      
I particularly liked this point: "Add a natural delay between messages to keep your chatbot from feeling mechanical" . While I know in the back of my mind that I am not chatting to a real human being, I would feel surprised and less engaged if I received instantaneous answers from the bot.

Having a delay therefore sounds like the right thing to do: you can also trigger the (...) typing indicator in Messenger while your bot is preparing the answer.
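On the Messenger Send API that works out to two calls: a "typing_on" sender action, a pause, then the message itself. A minimal sketch using the Python requests library; the page token and the typing-speed heuristic are placeholders:

    import time
    import requests

    SEND_API = "https://graph.facebook.com/v2.6/me/messages"
    PARAMS = {"access_token": "PAGE_ACCESS_TOKEN"}  # placeholder token

    def send_with_delay(recipient_id, text):
        recipient = {"id": recipient_id}
        # Show the (...) typing bubble while we "compose" the reply.
        requests.post(SEND_API, params=PARAMS,
                      json={"recipient": recipient, "sender_action": "typing_on"})
        # Crude heuristic: ~30ms per character, capped at 3 seconds.
        time.sleep(min(len(text) * 0.03, 3.0))
        requests.post(SEND_API, params=PARAMS,
                      json={"recipient": recipient, "message": {"text": text}})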

I see chatbots being used for everything from gaming to financial advising, but I am pessimistic on the AI side of it, i.e. on the bot being able to chat with you as if it were a real human being. My gut feeling is that chatbots will become extremely specialised and may excel in a given domain.

4
DanHulton 9 hours ago 0 replies      
Oh wow, this is interesting timing. I'm actually writing a fantasy game for Slack using their bot interface, and a lot of the lessons described here are things I picked up as well while showing to friends.

Slack offered buttons shortly after I started work, so I switched from trying to interpret text to offering clear buttons instead, and yeah it makes a huge difference. I know I was into MUDs a whole lot in my youth, but folks today expect a more intuitive interface.

It's nice to see I'm not the only one interested in bringing a little levity to the world of chat bots!

5
donatj 5 hours ago 2 replies      
I don't get the draw. None of the chat bots I've used seem that much more useful than an AIM or IRC bot of yesteryear. I'd rather click a button for an action than have to type something out. Seems like a gimmick to me.
6
Karrot_Kream 12 hours ago 1 reply      
A lot of the same points apply with Telegram bots, though I suspect the Telegram user ecosystem is a lot less ... refined ... than the average FB Messenger ecosystem.
7
akjainaj 10 hours ago 0 replies      
>1. People send a like/sticker as a conversation starter

I do that! When you open a conversation with a bot sometimes you don't know what the bot expects you to do to start a conversation, so instead of writing gibberish and expect a help text, I tap "like" because it's only one tap.

28
Removing Python 2.x support from Django for version 2.0 github.com
680 points by ReticentMonkey  15 hours ago   348 comments top 24
1
Flimm 14 hours ago 3 replies      
The next release, Django 1.11, will be a long-term support release, and the one after that, Django 2.0, will no longer support Python 2.

https://www.djangoproject.com/weblog/2015/jun/25/roadmap/

I've grown to highly respect the Django project for its good documentation, its healthy consideration for backwards compatibility, security, steady improvements, and all-round goodness.

2
rowanseymour 7 hours ago 0 replies      
I'm glad they are making a clean break from Python 2 and I hope this pushes other projects in the ecosystem to fix those remaining libraries without Python 3 support. It does get a bit frustrating when things break between Django releases, but they have a good system of deprecating things for a couple of releases beforehand. And at the end of the day, Django is for people who want to build websites, not life support machines... and I think they're doing a decent job of striking a balance between breakage and stagnation.
3
yuvadam 14 hours ago 3 replies      
This call has been made a while back, and it makes perfect sense. Python 2 is slowly being EOL'd and if you're starting a brand new Django project there's no reason on earth you should choose Python 2 anymore.

Sure legacy projects still need support and for that they get the 1.11 LTS, but otherwise it's really time to move on.

4
erikb 38 minutes ago 0 replies      
There are only two possible opinions here:

A) You mostly have Python3 projects: Then you like it because you know more ressources will be spent on your pipeline and having more Py3 packages is also helpful.

B) You still have Python2 projects: You hate it, because it pushes you out of your comfort zone.

But I have to say, we want our languages to develop as well. We want our packages to get attention. And there was lots of time to switch and to experiment with switching. Ergo, it should happen. Even if you don't like it much, that's where things are heading. Deal with it, move on. Let the community help you, if necessary.

5
nodamage 12 hours ago 8 replies      
I have a Python 2.7 project that has been running smoothly for many years now and I'm having trouble finding a reason to upgrade to Python 3. The project uses the unicode type to represent all strings, and encodes/decodes as necessary (usually to UTF-8) when doing I/O. I haven't really had any of the Unicode handling problems that people seem to complain about in Python 2.

Can someone explain what benefit I would actually gain from upgrading to Python 3 if I'm already "handling Unicode properly" in Python 2? So far it still seems rather minimal at the moment, and the risk of breaking something during the upgrade process (either in my own code or in one of my dependencies) doesn't seem like it's worth the effort.
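For readers who haven't seen the discipline the parent describes, it looks roughly like this; the main thing Python 3 adds is that the mistake in the final comment becomes an immediate, loud error rather than a latent bug:

    # "Unicode sandwich" under Python 2: decode at input, unicode inside,
    # encode at output. io.open does the (de|en)coding for you.
    import io

    with io.open("names.txt", encoding="utf-8") as f:
        names = f.read()        # unicode inside the program

    with io.open("out.txt", "w", encoding="utf-8") as f:
        f.write(names)          # encoded only at the I/O boundary

    # The trap this avoids: in Python 2, b"caf\xc3\xa9" + u"!" implicitly
    # decodes the bytes as ASCII and raises UnicodeDecodeError, but only
    # when the data happens to be non-ASCII. Python 3 refuses to mix bytes
    # and str at all, so the bug surfaces on the first test run.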

6
stevehiehn 13 hours ago 5 replies      
Good. I've been getting into Python a bit because I have an interest in data science. I'm mostly a Java dev. I have to say the Python 2/3 divide is a real turn-off. Many of the science libs I want to use seem to be on 2.7 with no signs of moving.
7
misterhtmlcss 4 hours ago 1 reply      
Is anyone going to talk about what this means for Python and Django? I read the first 30-40 comments and they are all about off-topic stuff related to Django, while the core premise is the committed move to Python 3.x going forward.

What do people think of that?! I'm a newer dev and I'd really, really love to hear what people think of it and what it means for the future, rather than side conversations about how bad their API is, how good it is, how good their docs are, how bad they are... blah blah.

Please!! This community is filled with some of the most brilliant minds and I for one don't want to miss out on this chance to hear what people think of this change.

Please, please don't reply that you disagree with my POV; that's irrelevant. But please do reply if you are interested in the initial topic. I'd be very excited to hear your thoughts.

So Django moving to Python 3.X Go :)

8
oliwarner 5 hours ago 0 replies      
A whole pile of people complaining about upgrading Django highlights two things to me:

Not enough people are using tests. A decent set of tests makes upgrades super easy. The upgrade documentation is decent, so you just spend 20 minutes fixing broken things until it all works again.

People pick the wrong version. I've seen people develop and even deploy on -dev, and it makes me cry inside, because they'll need to track Django changes in realtime or near enough. Pick an LTS release and you get up to three years on that version, with security and data-loss fixes and no API changes.
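For anyone without that safety net yet, even a trivial suite pays for itself at upgrade time. A minimal sketch; the URLs are illustrative:

    # Smoke tests: hit the key pages and assert they still render
    # after a Django upgrade.
    from django.test import TestCase

    class SmokeTests(TestCase):
        def test_homepage_renders(self):
            self.assertEqual(self.client.get("/").status_code, 200)

        def test_login_page_renders(self):
            self.assertEqual(self.client.get("/accounts/login/").status_code, 200)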

9
myf01d 8 hours ago 1 reply      
I hope they just find a way to support SQLAlchemy natively, like they did with Jinja2, because the Django ORM is really very restrictive and has numerous serious, annoying bugs that have been open since I was in high school.
10
gkya 13 hours ago 0 replies      
This is a nice patch [1] for Python coders to review. It seems to me that most of the incompatibilities stem from the unicode transition.

[1] https://patch-diff.githubusercontent.com/raw/django/django/p...
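Concretely, most of the patch is deletions of this shape, repeated module after module (reconstructed for illustration, not copied from the diff):

    # -*- coding: utf-8 -*-
    # ^ deleted: UTF-8 is the Python 3 source default (PEP 3120)
    from __future__ import unicode_literals
    # ^ deleted: str is unicode in Python 3

    from django.utils.encoding import python_2_unicode_compatible

    @python_2_unicode_compatible
    # ^ decorator and import deleted; defining __str__ alone suffices in Python 3
    class Tag(object):
        name = "example"

        def __str__(self):
            return self.name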

11
karyon 13 hours ago 0 replies      
The related django issue is here: https://code.djangoproject.com/ticket/23919

there are lots of other cleanups happening right now. It's a real pleasure to look at the diffs :)

12
gigatexal 7 hours ago 0 replies      
This is great news. It will help move people off their python 2 code bases even more. Kudos to the Django team.
13
ReticentMonkey 15 hours ago 0 replies      
14
mark-r 7 hours ago 1 reply      
I was surprised to see the elimination of the encoding comments; I thought the default encoding would be platform-dependent. After a little research I found PEP 3120, which mandates UTF-8 for everybody, implemented in Python 3.0. It also goes into the history of source encoding for 1.x and 2.x. I wonder why there aren't more problems with Windows users whose editors don't use UTF-8 by default?
15
Acalyptol 12 hours ago 2 replies      
Time to introduce Python 4.
16
karthikp 13 hours ago 2 replies      
Oh boy. And here I am still using Py2.7 with Django 1.6
17
romanovcode 13 hours ago 1 reply      
Good, it's about time this nonsense ends.
18
ReticentMonkey 11 hours ago 1 reply      
Can we expect the async/await introduced in Python 3 to be used for async request handling, or maybe for some heavy operations? Something like sanic: https://github.com/channelcat/sanic
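For reference, the handler style sanic popularized looks like this (essentially that project's README example; Python 3.5+ only), which is exactly what Django cannot offer while it still supports Python 2:

    from sanic import Sanic
    from sanic.response import json

    app = Sanic()

    @app.route("/")
    async def index(request):
        # Real handlers would await non-blocking I/O here.
        return json({"hello": "world"})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)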
19
alanfranzoni 14 hours ago 7 replies      
So, after a poor evolution strategy that led the Python world to be split in two, forcing maintainers to offer two versions of the same library and upstream maintainers to support two different Python versions, the same is happening to Django!

I speculate that the latest Django 1.x will remain in use, and possibly the most used, for a long, long time.

20
daveguy 10 hours ago 1 reply      
Seriously? The entire change to drop support for the majority of Python code is a mass deletion of "from __future__ import unicode_literals" lines and UTF-8 encoding comments? Is that really the extent of the "too difficult to maintain" code? There will be a split.
21
scrollaway 12 hours ago 2 replies      
Oh my god stop. You're all over this thread. What bit you?

This is the price you pay for staying on an old version. You do not get to stick to an old version AND demand that others do too.

You CAN stay on Python 2. You CAN stay on Django 1.11. It's LTS. So is Python 2.7. You get to use both until 2020 with no issues. After that, not upgrading is a technical debt that will start to accrue, faster and faster as you can no longer use recent versions of various software.

You are free to make your infrastructure immutable; you then become responsible for it of course. And the money you're not willing to spend porting to Python 3 today will be money you spend on costs related to being on outdated infrastructure, years in the future. That's a tradeoff. Banks do it a lot I hear. A bunch of companies still use ancient hardware and technologies nobody would think of starting a business with today. These companies make billions.

You know what the employees of these companies aren't doing? They're not bitching on HN that the tech they're using is no longer supported.

22
jdimov11 12 hours ago 3 replies      
Says who?? Someone with delusions of grandeur, obviously. Because that's not up to anyone to say. Python 2 is obviously NOT going away any time soon. You can't just look at reality and claim the opposite just because it pleases you. Python 2 is here to stay and is in MUCH better shape than Python 3, in terms of actual production usage globally. Python 3 is a bad joke that someone wants to force down people's throats for NO good reason at all.
23
belvoran 8 hours ago 0 replies      
VERY GOOD NEWS!!!

Yeah, I know, shouting is not the best thing, but this really is good news.

24
jonatron 10 hours ago 0 replies      
Django was designed for quickly building content-based sites and CMSes. It wasn't designed for webapps and REST APIs; it can be used in those cases, but it's not great. I'd look at other options.
29
Automatic HTTPS Enforcement for New Executive Branch .gov Domains cio.gov
75 points by konklone  5 hours ago   56 comments top 9
1
Bartweiss 4 hours ago 0 replies      
This is fantastic news.

It wasn't that long ago that I tried to log into a government site via my SSN, and discovered that the page didn't even permit HTTPS. I was displeased, to say the least; logging in wasn't exactly optional, so it seemed much worse than a business offering poor security.

Permitting HTTPS is obviously the first step, but security shouldn't be limited to people with the expertise to seek it out. I'm really glad to see that something as inescapable as the .gov domain will be pursuing security-by-default.

2
konklone 4 hours ago 3 replies      
Co-author of the post here, happy to answer questions. =)

This is a GSA initiative, not an 18F initiative. But 18F has a recent post detailing executive branch progress on HTTPS that may also be relevant:

https://18f.gsa.gov/2017/01/04/tracking-the-us-governments-p...

3
3pt14159 2 hours ago 0 replies      
If anyone works in the Canadian government and wants my input on getting the political support to make this happen in your department: I've been helping some departments understand the nature of the risks of MITM attacks (some are even paying me as a consultant!). It's taking time, but I'm slowly seeing improvement. I can give you some tips on how to properly communicate the importance of these and other measures (like installing monitors such as Appcanary to watch for security vulnerabilities).

My email is in my profile :)

4
Godel_unicode 2 hours ago 1 reply      
I said something similar in a reply below, but I find it interesting that this amounts to a .gov-wide decision that availability is always less important than confidentiality and integrity.

While that's probably valid in the main, is that always true? FEMA/NOAA spring to mind. As does IRS guidance, especially since those documents should have digital signatures themselves for an additional layer of integrity.

Was this idea part of the discussion?

5
hannibalhorn 1 hour ago 1 reply      
From what I gather, Let's Encrypt meets the guidelines to be considered acceptable, but it is not really mentioned anywhere, neither on the linked page nor on https.cio.gov. Is there any feeling one way or the other on the use of Let's Encrypt for .gov?

Certainly one of the biggest headaches of the classic approach is forgetting to renew your certificate on time, a situation which Let's Encrypt effectively avoids.

6
t0mas88 4 hours ago 3 replies      
As a practical question: what is the expected capacity of browsers' preload stores? Hundreds of thousands of domains, millions, or many more? Because at some point it seems like everyone with moderately high security requirements may want their certificates pinned/preloaded.
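For context, preloading piggybacks on the HSTS header: a domain qualifies for the browser lists by serving the header over HTTPS and then submitting itself (hstspreload.org handles Chrome's list, which other browsers derive from). A minimal sketch of emitting it from a Python WSGI app; the one-year max-age is the commonly required minimum:

    # WSGI middleware that adds the preload-eligible HSTS header.
    def hsts_middleware(app):
        def wrapped(environ, start_response):
            def sr(status, headers, exc_info=None):
                headers.append((
                    "Strict-Transport-Security",
                    "max-age=31536000; includeSubDomains; preload",
                ))
                return start_response(status, headers, exc_info)
            return app(environ, sr)
        return wrapped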
7
excalibur 3 hours ago 1 reply      
Unable to click through certificate warnings = completely inaccessible when there is an issue with certificate validation. Look at the shiny new attack surface!
8
prodtorok 2 hours ago 1 reply      
How is this being enforced? And what about subdomains?
9
cakeface 3 hours ago 5 replies      
What are the odds that the private keys for all of the .gov domains are also sent to the NSA? I guess if you are worried about another nation spying on your traffic you would be fine. I would expect that all of this traffic is decryptable by NSA though.
30
The Impacts of Video Games on Cognition (2015) [pdf] wisc.edu
88 points by lainon  11 hours ago   66 comments top 12
1
WA 10 hours ago 17 replies      
Having only read the abstract and key points: the question is whether video games have a better positive impact on cognitive abilities than other things you could do with that time, which may also improve other abilities as well.

I write this because I quit video games almost entirely a year ago. I played probably 500-700 hours of video games in 2015. I played occasionally simple games on my phone in 2016.

The major downside for me was that I felt video games only improve my ability to play video games. I favored online games over single-player games, but at the same time, I felt the addiction behind them. I can't play video games "for fun". I want to be good, and that costs time. Video games have the downside of giving players easy rewards. I think that does something bad to your brain. Not necessarily altering it forever, but at least turning one into a little dopamine junkie.

In 2016, I spent about 40 three-hour sessions in a private art school. I improved my drawing and painting skills a lot in merely 120 hours. That was a much better investment of my time than video games, and I wonder if the forced concentration of looking at objects and colors for 3 hours straight produced comparable improvements in cognitive abilities.

Anyways, what I want to say is this: If you feel that you're wasting too much time on video games, despite what research tries to spin positively about it, try to quit them and see what happens.

2
xherberta 9 hours ago 1 reply      
Summary:... we need more research... existing studies have flaws... guidelines for kids aren't even based on research... what gov't agency should regulate games? ... some agency should regulate brain-training games.

Here's the gem:

When it comes to surgeons, "cross-sectional research shows action video game experience is a better predictor of positive surgical outcomes than years of training or number of surgeries performed."

3
greenail 7 hours ago 0 replies      
It would be more interesting to compare playing video games to other pastimes such as golf, coding, reading, or watching TV...

I'm surprised no one else noticed the conflict of interest disclaimer:

A. R. Seitz is a founder and stakeholder in Carrot Neurotechnology, a company that sells a vision brain game called ULTIMEYES. Carrot, and Seitz as an individual, are involved in a case with the FTC regarding advertising claims that Carrot made based upon Seitz's university-based research.

4
Agentlien 8 hours ago 0 replies      
Before I started working in AAA game development I spent a few years working in the field of virtual training simulators for laparoscopic surgery.

In that context there were several similar studies. We mainly focused on those performed in order to validate skill transfer from our own training simulation to the operating room.[1][2][3]

However, there were also studies which showed that doctors who had spent a lot of time playing video games in their spare time performed better in the operating room, as well.[4]

[1] Skills acquired on LapSim transfer into the operating room, Gunnar Ahlberg et al., The American Journal of Surgery, 2007:193, p797

[2] Novice performance level bypassed by VR simulation training, Christian Rifbjerg Larsen et al., British Medical Journal, 2009:338, p1802

[3] VR laparoscopic training outperforms traditional curriculum, Vanessa Palter et al., Annals of Surgery, 2013:257, p 224

[4] The impact of video games on training surgeons in the 21st century, https://www.ncbi.nlm.nih.gov/pubmed/17309970

5
matheusmoreira 9 hours ago 0 replies      
There are genres of video games much more deserving of strict regulation than "brain-training" games which may or may not be backed by science.

Way too many mobile games I've played function like casinos. Their goal is to make players spend as much money as possible and the game design reinforces such behavior and also keeps the player coming back like an addict attempting to avoid the negative symptoms of withdrawal.

When it comes to the safety of players, I think these money sinks are a much bigger concern than games that may or may not enhance cognitive abilities.

6
jack9 2 hours ago 0 replies      
My experience after over 35 years of hardcore gaming (since I first saw pong).

Largely, video games are about linear optimizations with multiple complex variables in a highly dynamic (borderline chaotic) environment. This approaches the complexity of day to day life, as many people can understand their situation in life. Unlike life, there are known bounds for most games or it takes a finite amount of time to explore these bounds to sufficient understanding to progress in some metric (usually winning ratio, but also APM or K/D ratio).

Frustration is largely a result of RNG punishment (that can FAIL? Why did my mats disappear after my 4 hours of grinding? etc) or participation failure (there were 3 of you and you couldn't handle 1?) or mechanics failure (imba <mechanic> just wrecks everyone) which includes nerfing a mechanic a player relies on. Every now and then you will also see player frustration related to a lack of dextrous skill (bullet hell shooters, fps games, rts apm, etc) or understanding (this game's <x> is just broken! - even though the game is founded around that mechanic being the strongest - i.e. Chess/Queen, Counter Strike/CT pistols > glock).

Like text puzzles or mindgames, video games allow for people to constantly develop strategies and exercise them for a complexity level that is uncommon in human experience. In group settings, the gamers generally come up with solutions to problems (of various quality) quickly and decisively, then will later discuss deeper strategy. Video game trial-and-error trains for that kind of approach and lots of playtime tends to treat indecisiveness as disadvantageous.

Does it make you smarter? Not that I have seen. Does it make you faster? Yes, it trains you to optimize your reaction time at a moment's notice and there is a dopamine feedback loop, even if you are a poor performer. Does it help you long-term in staving off cognitive decline? Probably not directly, but there was a study about constantly challenging your mind to combat decline, iirc.

Those are my thoughts.

7
TulliusCicero 8 hours ago 1 reply      
It's funny, because we'll talk all day long about how (physical) play is so important to a child's cognitive development, never considering that maybe mentally-focused play is also beneficial. The very fact that video games are so compelling/addictive has given them the reputation of being nonproductive, so we tend to assume that they must give little or no benefit to the individual player.
8
edpichler 27 minutes ago 0 replies      
Guys, what tool is usually used to create beautiful papers like this: LaTeX, Adobe InDesign, or something else?
9
ismail 8 hours ago 1 reply      
I wonder about the impact of pay-to-win type games.

Had an interesting discussion with my bro, who is into gaming. He says most of his peers are playing "pay to win": the games are only superficially based on skill, and the only way to progress, compete and win is buying upgrades.

I stopped playing games seriously a while back so a bit out of the loop.

The games are rewarding $$$ over putting in the effort/time to learn. I'm curious about a few things:

What are the effects of these types of games?

Could it discourage effort, make people believe they can buy their way to skills without effort?

What are the impacts of constant gamification? The app stores are filled with toddler games heavily gamified. Are we creating a generation of people that will be purely extrinsically motivated?

10
openfuture 7 hours ago 1 reply      
I was a severe World of Warcraft addict and I'm pretty sure it had a permanent effect on my cognition. Whether that effect was 'bad' is an open question, but there's no doubt in my mind that video games have an enormous effect on your way of thinking.
11
rdiddly 7 hours ago 1 reply      
Using Visual Studio is kind of like a simple video game. Make the red underlined errors go away! Pew! Pew! Select the right intellisense on the fly to complete the puzzle! Deploy the right code snippet for the situation!

Pretty sure it's enhancing my cognitive abilities too.

12
mememachine 4 hours ago 0 replies      
Why are they interested in regulation?
       cached 19 January 2017 23:02:02 GMT  :  recaching 30 mins