hacker news with inline top comments    .. more ..    8 Mar 2017 Best
CIA malware and hacking tools wikileaks.org
2270 points by randomname2  16 hours ago   845 comments top 2
abandonliberty 8 hours ago 26 replies      
It's interesting to note that Julian Assange didn't demonstrate control of the wikileaks private key during his Reddit AMA 1 month ago: https://www.reddit.com/r/IAmA/comments/5n58sm/i_am_julian_as...

Considering the political situation unfolding in the US and who this leak weakens, there is some evidence that wikileaks is not in the hands of a neutral party.

There is clear motive right now for undermining the CIA. This may not have been an act of altruism like Snowden's. While the leak is shockingly damaging to the American arsenal as a whole, the CIA is by far the biggest loser.

This comment was immediately down voted on Reddit. Someone is seeking to control the narrative.

apo 14 hours ago 9 replies      
In what is surely one of the most astounding intelligence own goals in living memory, the CIA structured its classification regime such that, for the most market-valuable part of "Vault 7", namely the CIA's weaponized malware (implants plus zero-days), Listening Posts (LP), and Command and Control (C2) systems, the agency has little legal recourse.

The CIA made these systems unclassified.

Why the CIA chose to make its cyberarsenal unclassified reveals how concepts developed for military use do not easily cross over to the 'battlefield' of cyber 'war'.

To attack its targets, the CIA usually requires that its implants communicate with their control programs over the internet. If CIA implants, Command & Control and Listening Post software were classified, then CIA officers could be prosecuted or dismissed for violating rules that prohibit placing classified information onto the Internet. Consequently the CIA has secretly made most of its cyber spying/war code unclassified. The U.S. government is not able to assert copyright either, due to restrictions in the U.S. Constitution. This means that cyber 'arms' manufacturers and computer hackers can freely "pirate" these 'weapons' if they are obtained. The CIA has primarily had to rely on obfuscation to protect its malware secrets.

One of the more interesting passages. The arsenal must not be classified to protect those who deploy it from legal action. This cyberwarfare kit, which can just as easily be used to destroy the US as one of its enemies, is public domain software created and released at US taxpayer expense.

How Uber Used Secret Greyball Tool to Deceive Authorities Worldwide nytimes.com
1135 points by coloneltcb  4 days ago   765 comments top
SilasX 4 days ago 14 replies      
It took me about eight paragraphs to figure out what Greyball is, so to save you the time: Uber used various data sources to identify which users were likely government officials trying to collect incriminating data on the company, and then blocked them from the service so they couldn't be caught in sting operations.

But there's a lot in the article that doesn't make sense:

>Other techniques included looking at the user's credit card information and whether that card was tied directly to an institution like a police credit union.

I couldn't find a good source, but it doesn't seem like that's something a CC merchant would have access to. Do they really get to see that?[2]

Also, how were they able to do it so accurately without disrupting their service? Most city employees and police aren't going to be involved in sting operations against car services, so customer support would have to deal with a torrent of very confused government employees [1] who keep getting mysterious rejections when they try to use the app, and to whom support can't give a truthful answer.

Plus, this seemed to require significant on-the-ground intel and human intervention:

>If those clues were not enough to confirm a user's identity, Uber employees would search social media profiles and other available information online. Once a user was identified as law enforcement, Uber Greyballed him or her, tagging the user with a small piece of code that read "Greyball" followed by a string of numbers.

So, I'm surprised it worked at all.

[1] identified by that person having more-than-usual activity inside a recognized government building

[2] EDIT: Okay, I get it -- you can look up banks from the CC number. Can we not have further comments just to point this out?
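The lookup mentioned in that edit can be sketched in a few lines: the issuing bank of a card is inferred from its BIN (the first six digits of the card number). The prefix table below is made-up example data for illustration, not a real BIN registry.

```python
# Illustrative sketch of an issuer lookup from a card's BIN
# (first 6 digits). The table is fabricated example data,
# not real BIN registry entries.
EXAMPLE_BIN_TABLE = {
    "411111": "Example National Bank",
    "550000": "Example Police Credit Union",
}

def issuer_for_card(card_number: str) -> str:
    """Return the issuing institution for a card number, if known."""
    bin_prefix = card_number.replace(" ", "")[:6]
    return EXAMPLE_BIN_TABLE.get(bin_prefix, "unknown issuer")

print(issuer_for_card("4111 1111 1111 1111"))  # Example National Bank
```

A merchant only needs the card number it already has to do this lookup, which is presumably why it was feasible at all.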

94-year-old Lithium-Ion Battery Inventor Introduces Solid State Battery utexas.edu
1161 points by andruby  5 days ago   276 comments top 12
hwillis 5 days ago 14 replies      
I'm scanning the paper really quickly. I'm not a chemist but I do know a thing or two about batteries and the standard caveats apply here:

When they say 3x volumetric energy density, that is energy per liter (normal density is mass per liter); normally people use "energy density" to refer to energy per kg. Because this is a solid-state battery, it is much denser than normal batteries (which are roughly as dense as water). Solid-state batteries are smaller but much heavier, and this is no exception. It is 33% the size of a lithium battery, but for the same energy it's about 2.5x heavier. Weight is still a much bigger problem for batteries than size: batteries are much smaller than the exhaust, engine and transmission of a car, but also much heavier.
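The size/weight trade-off described above can be made concrete with back-of-the-envelope arithmetic; the figures are the comment's rough ballpark numbers, not measured data.

```python
# Rough arithmetic for the volume/weight trade-off described above.
# Numbers are the comment's ballpark figures, not measured data.
li_ion_volume = 1.0   # relative volume of a Li-ion pack
li_ion_weight = 1.0   # relative weight of a Li-ion pack

# Same stored energy in the solid-state cell, per the comment:
solid_state_volume = li_ion_volume / 3      # 3x volumetric energy density
solid_state_weight = li_ion_weight * 2.5    # but ~2.5x heavier overall

print(f"volume: {solid_state_volume:.2f}x, weight: {solid_state_weight:.1f}x")
# -> volume: 0.33x, weight: 2.5x
```

In other words, a pack with the same capacity shrinks to a third of the volume but gains two and a half times the mass, which is the wrong direction for vehicles.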

The main limit on specific energy(kwh/kg) for this battery and for solid state batteries in general is voltage. Li-ion is 3.7v nominal, this battery is 2.5v nominal.

1,200 cycles may seem low, but it is actually very good; around 3x the life of current batteries. This cycle life is the time to degrade to 80% maximum storage, at a certain discharge depth and speed. Current batteries only last 300-400 cycles at their specs, but last tens of thousands at 30% depth of discharge.

Problem with the above: in this particular battery, the chemistry breaks down very strongly after it reaches the end of life. Normal lithium does this too, but not as strongly. This stuff may potentially last longer, but it fails much less gracefully. Not in a dangerous way, but in the same way as a normal car often does; once it's broken it'll just work worse and worse until it is barely limping.

The temperature capabilities may seem irrelevant, but they are actually a decent problem for li-ion and are the reason lead acid is still used in cars.

Another interesting possibility for glass solid state lithium batteries is that recycling would be very easy. In organic batteries the electrolyte burns or reacts pretty much no matter what you do, but with glass you can plate and unplate cells. Unfortunately due to specific energy, polymer solid state electrolytes are much more likely than glass (also much cheaper).

Edit: IMPORTANT NOTE: this is NOT a fundamentally new type of li-ion battery! Solid state batteries have been around a while (glass, ceramic and polymer), and have specific advantages but low specific energy and power. This particular implementation is a bit higher power and possibly lower cost, but it's just a little blip of progress. Solid state batteries are a good candidate for the future, but they aren't there yet.

nimos 5 days ago 3 replies      
The greatest travesty in the world right now is that there doesn't exist a multi-government project, ten times the size of the Manhattan Project, to develop a really, really good battery.

It would enable a huge reduction in CO2 emissions (bye, gasoline), allow developing countries to have stable electrical sources and low-cost renewable distributed generation, cut costs (good includes cheap), and enable renewable energy sources to make up a larger share of our energy production.

Always exciting to read about new developments. I wish I knew more about chemistry/physics; hopefully we get there sometime soon!

danm07 5 days ago 7 replies      
Is it just me or is Goodenough an amusingly paradoxical last name for an inventor?
philipkglass 5 days ago 1 reply      
The press release says "the researchers' cells have demonstrated more than 1,200 cycles with low cell resistance." That's nice, but rising cell resistance is just one way the battery might cycle poorly over time.

Skimming the actual paper, I don't think they demonstrate 1200 cycles for any parameter. Eyeballing the graphs, it looks like they did charge/discharge testing for maybe 1200 hours (Figure 3a), but with really slow charge/discharge cycles of 10 hours each. Anyone publishing in this space would highlight 1200 cycles of stable cycling in the actual paper, if they had data to demonstrate it. They'd also show off faster cycles if high-rate performance looked good.

Looking at the data the authors did not present in the actual paper, I'm guessing that this battery doesn't handle rapid charge/discharge cycles well. (Their cycling test is at only 0.1C, paper claims "acceptable charge/discharge rates" but does not further quantify it... implication is "not a strength of this design.") It may not have great capacity retention either. I don't see any graphics specifically highlighting capacity retention vs. cycles. So at present I'd call this a solid research effort, but even if it could be commercialized immediately it's not clear that it would be a winner. The demonstrated charge/discharge rate is too slow to be practical for EVs or portable electronics. The demonstrated cycling stability is too low to be attractive for grid tied storage.

People who found this paper interesting may also be interested in this related publication about solid state sodium ion batteries that Goodenough was also involved with: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5269650/

ChuckMcM 5 days ago 2 replies      
I have put a note in my calendar a year from now, and a year after that to read the story about how these laboratory curiosities could not be made in production quantities.

Something I would really love to see would be a solid state battery that exploited the fact that we can draw silicon features at 7nm. How about a couple trillion equivalents of a FLASH cell which we can drain in rows or fill in rows. Sort of a bucket brigade of capacitors at that point, but it would not have any recharge issues, and since it's just charge flying about, no dendrites to speak of. At some point I predict that will become a useful way to build energy storage devices.

andruby 5 days ago 1 reply      
The researchers published the paper, titled "Alternative strategy for a safe rechargeable battery" in the Energy & Environmental Science journal [0]

[0] http://pubs.rsc.org/en/Content/ArticleLanding/2017/EE/C6EE02...

rebootthesystem 5 days ago 3 replies      
A comment that I think is relevant in a community of startups that often, explicitly or de-facto practice age discrimination:

This engineer is 94 years old. Few, if any, SCV startups would have hired him. Yet, here we are, he might have just developed the key technology to push electric transportation through the hockey stick curve past the inflection point.

Three times the energy density, many times more cycles, no shorting, high charge and discharge rates. Yeah, this is more than just about laptops and cars, this is about planes, boats, trucks and ships.

Think about that before you reject a 50+ year old. Experience has real value.

bluejekyll 5 days ago 2 replies      
> Braga began developing solid-glass electrolytes with colleagues while she was at the University of Porto in Portugal.

Slightly orthogonal to the content: this is why it's important for the US to allow people to come to this country. Continuing research and development with the world's best gives the US a leg up. Past ease of immigration is the reason the US has led in so many areas: a constant flow of new and innovative ideas.

rhodin 5 days ago 1 reply      
Interesting footnote: "The UT Austin Office of Technology Commercialization is actively negotiating license agreements with multiple companies engaged in a variety of battery-related industry segments."

I recommend the book "The Powerhouse: America, China, and the Great Battery War"[1]; it goes into some detail about the legal issues around (one) li-ion design and Thackeray's work with/without Goodenough and attribution.

[1] https://www.amazon.com/Powerhouse-America-China-Great-Batter...

bitmapbrother 5 days ago 1 reply      
I was watching a Nova episode on batteries and they showed a new type of lithium-metal battery that used a plastic material to separate the positive and negative halves. The unique thing about this new lithium-metal battery was that it was not prone to the existing problems of lithium-metal batteries and did not catch fire when punctured. They even started cutting portions of the battery off with scissors and it continued powering an iPad.
agentgt 5 days ago 0 replies      
I have not examined the article yet, but I was somewhat impressed with Mike Zimmerman's work: http://www.pbs.org/wgbh/nova/next/tech/new-damage-proof-batt...

Yes, the linked Nova show has that goofy host, but it's still pretty interesting and is accessible to young adults and kids.

I highly recommend seeing the whole Nova show if you can.

logfromblammo 5 days ago 1 reply      
Basically, the battery has a lithium (Li), sodium (Na), or potassium (K) metallic anode, a solid glass electrolyte, and an "ink" cathode of sulfur (S8), ferrocene (Fe(C5H5)2), or manganese oxide (MnO2) on a copper (Cu) current collector.

So it's not entirely solid. The cathode may have a tiny bit of liquid electrolyte between the glass and the copper.

View a helpful diagram at http://www.greencarcongress.com/2016/12/20161213-braga.html .

Images and video showing extent of Oroville dam damage imgur.com
840 points by JabavuAdams  4 days ago   126 comments top 21
albeebe1 4 days ago 7 replies      
I really like the way those videos were presented: silently, looping, laid out one on top of the other, each with a one-line description. If that had been one long video, I would have kept jumping backwards to rewatch all the scenes. Final thought: drone footage continues to blow me away.
patrickg_zill 4 days ago 2 replies      
Oro = "gold" ville = "town"

California had a gold rush - part of the gold mining was done using "hydraulic mining", where you use water to scrub away rock and direct it into a channel for the runoff to be run through mining equipment.

To be sure, there are plenty of gold prospectors downstream panning or perhaps even using metal detectors in the areas where the waters have receded, looking for golden flakes ("flour") and nuggets.

I don't know the exact law in CA but in general you can use human-powered methods without mechanical assistance, to pan for gold (basically what looks like a pie-pan which has ridges inside to catch the heavier gold).

wgyn 4 days ago 3 replies      
The Oroville Dam is really a crazy undertaking when you consider: 1) it primarily serves to move water all the way from north of Sacramento to San Joaquin Valley / Southern California, 2) despite generating some amount of electricity, it actually consumes 3/4 of that energy just in transporting the water (over mountain ranges). More generally, the State Water Project (of which the dam is a part) is the largest electricity consumer in the state of California.

Source: https://www.amazon.com/Introduction-California-Natural-Histo... (kind of "dry", but extremely informative)

marze 4 days ago 4 replies      
What isn't discussed much in the press is how close they came to the largest dam disaster in US history.

As you can see from the pictures, the "bedrock" under the emergency spillway erodes quickly, even with a modest amount of water flowing over it. How would it have handled a flow 100x larger, if the spillway structure had collapsed?

It could have easily eroded down 100 or 200 feet under the intense flow. That would have discharged 50% of the total reservoir volume, destroying tens of billions in real estate and possibly killing thousands.

Fortunately, the dam inflow slowed enough, combined with the main spillway outflow, to stop flow over the emergency spillway before it collapsed. So lucky.

sikhnerd 4 days ago 0 replies      
Don't miss the March 1 update by the same guy: https://imgur.com/gallery/6IyCi Really amazing to see.
danbruc 4 days ago 4 replies      
Does anybody happen to know why the water flows down the spillway in waves? Best seen right in the first GIF. I am curious what the mechanism behind that is. Is this a general phenomenon of water flowing down an inclined plane or is it caused by other oscillations, waves on the reservoir, resonances in the outlet system, or something along that line?
ghubbard 4 days ago 1 reply      
The videos these clips are taken from can be found on the California Department of Water Resources YouTube page:


lvspiff 4 days ago 4 replies      
Seeing what was done here in a matter of days makes you realize how things like the Grand Canyon formed over millions of years, and then wonder why it isn't bigger. Can't imagine the wealth of knowledge that has been gained by just watching the erosion patterns as the failure evolved.
ben1040 4 days ago 0 replies      
That scour reminds me of what happened after the Taum Sauk hydroelectric reservoir in Missouri failed in 2005, which sent 20 feet of water down a small mountain:


OrwellianChild 4 days ago 7 replies      
I'm trying to understand context around the design of the dam spillway vs. historical rainfall and usual flow...

Was this way more rain than it was designed for? Or was it structural failure within design tolerances that caused all the damage?

zodPod 4 days ago 0 replies      
Once the water goes away, the scene is reminiscent of something from Minecraft when someone gets too TNT-happy... This is nuts. Thanks for sharing, OP!
dredmorbius 3 days ago 0 replies      
As I've commented at length just now, this is an excellent set of visualisations of the damage, and history, of this story. It exceeds all media coverage I've seen of the event (though I've not canvassed all of that coverage).

I'd like to also present Wikipedia's article on the Oroville Dam Spillway Crisis of 2017, which is another example of exemplary coverage, and what's been a consistent model for me for vastly better coverage of ongoing large-scale events since the Boxing Day Earthquake and Tsunami of 2004, the first time I'd followed a major story by way of Wikipedia:


In roughly 25 paragraphs, plus an SVG image essay showing the progression of the damage, apparently original work by a Wikipedia editor, this spells out the background, event, development, implications, and history of the failure.

Both the Imgur essay and the Wikipedia article are vastly more informative than any news coverage I've seen. In the case of Wikipedia, much of that comes from its ability to synthesize information from multiple sources and place it in a coherent context. But in both cases, much of the value also comes from a focus on what I see as the salient factors, and an avoidance of fluff.

Taking a quick second look at the Wikipedia page, the one fault I'd give it is that for someone immediately affected by evacuation orders, there is insufficient information about what routes were recommended or deprecated. For that, some local news accounts (I'd read the Sacramento Bee and SF Chronicle's coverage in particular) was perhaps more useful, but only just.

I'd put it hard to the press: just what do they see their mission as in reporting on such events? Because, much as I appreciate the media, they fell down here.

Addendum: Brad Plumer, at Vox, has previously caught my attention as an exceptionally good reporter. His article explaining the Oroville Dam crisis is cited by Wikipedia, and is itself also excellent. I'm calling it out specifically as an example of How to Do Coverage Right:


ruminasean 4 days ago 0 replies      
Wow. This was gorgeous, thorough and horrifying. Thanks for posting.
frik 3 days ago 1 reply      
Lots of infrastructure was built in the 1950s/60s/70s. A reminder that letting our infrastructure rot is not good at all (not implying that's the case with the Oroville dam; I haven't followed all the news articles). Many concrete and steel structures have a limited lifetime of 60-100 years (or so). Big infrastructure projects were built with ease back then; compare that to the overblown "paper work" nowadays (not speaking about engineering, but bureaucracy). We should really look at how Asian countries excel at building infrastructure nowadays (they build multi-level highways and bridges in no time with ease, whereas building a bridge/tunnel in the West takes many years and costs way too much) and look back in history at how to maintain infrastructure well over a timeframe of a few centuries. How many 1950s/60s/70s structures (big things like skyscrapers, dams, bridges, tunnels, subways, etc.) are a risk, need repair, or are beyond repair and need replacement? Is there a map? Is there documentation on this topic, or some interesting research in this field?
olivermarks 3 days ago 0 replies      
The latest is that the banks of the Feather River have collapsed after the huge amounts of water receded. Amazing, the power of water... "With high water no longer propping up the shores, the still-wet soil crashed under its own weight, sometimes dragging in trees, rural roads and farmland," they said.

"The damage is catastrophic," said Brad Foster, who has waterfront property in Marysville (Yuba County), about 25 miles south of Lake Oroville.

The farmer not only saw 25-foot bluffs collapse, but also lost irrigation lines to his almonds. "When the bank pulled in," he said, "it pulled the pumps in with it. It busted the steel pipes."


ufo 4 days ago 3 replies      
What is the plan for the Oroville dam, moving forward?
jmspring 4 days ago 0 replies      
Currently living up in the Sierras along the Feather River, it's no surprise the amount of water that went rushing down into the Oroville reservoir. Even at about 4500 feet, you had about 3-4 feet of snow that ended up being melted by a significant amount of rain.

That is quite a bit of water rushing down that way.

Interesting times and the videos are quite educational.

abpavel 3 days ago 0 replies      
A classical "we've found a problem. let's fix it later, when it will be too late" approach to infrastructure.

I'm sure there will be dozens of "good reasons" why the repairs could not have been done in time, but the theme of the approach to infrastructure problems is disheartening.

nkkollaw 4 days ago 0 replies      
Nature is pissed off.
getpost 4 days ago 1 reply      
So they built a big, strong dam and "protected" it with a smaller, weaker dam. Seriously, what was the rationale for this design? No criticism or Monday-morning quarterbacking here; I'm genuinely curious what the original engineers were thinking.
maxerickson 4 days ago 1 reply      
When the evacuation was announced I sort of hastily pasted into a chat channel that the dam was going to fail.

The main dam of course did not fail, but looking at the recent photos, the overall system sure did fail. I hadn't realized the extent to which the spillway had continued to erode.

Mathematics for Computer Science [pdf] mit.edu
1014 points by lainon  2 days ago   147 comments top 18
impendia 2 days ago 17 replies      
This fall I will be teaching the required "Discrete Math for CS" course to about fifty students at the University of South Carolina. Previously I used Epp's book [1], which in my opinion is outstanding but regrettably costs $280.44. Many of our students are working minimum-wage jobs to make ends meet, and I don't want to make them pay so much if I can at all help it.

Lucky I saw this!!

I do have one reservation though -- many of our students come in with a weaker mathematical background than MIT students; for example, we spent several weeks doing proofs by induction (and no other kinds of proofs), and this text doesn't seem to feature a couple of weeks' worth of such examples.

I think I'll probably go with this and supplement as needed. Really it looks quite wonderful. (And hell, the book seems to be open source, which would mean that I could potentially write supplementary material directly into the book and make my version available publicly as well.)

This thread seems like a particularly good place to solicit advice: experiences with this book or others, what you wished you'd learned in your own undergraduate course on this subject, etc. I've taught this course once before -- I feel I did quite well but I still have room to improve. Thanks!

[1] https://www.amazon.com/Discrete-Mathematics-Applications-Sus...

lucb1e 1 day ago 7 replies      
This document is over 900 pages. How long are you supposed to take to read and understand all of this? And is this really all necessary?

On the surface this looks like an insurmountable task with questionable benefits. Don't get me wrong, but in the past six years of casual and professional programming, I've needed only a basic understanding of math, the most difficult things being collision detection in games and doing RSA cryptography by hand. This text starts off with proofs, something I've never had in school and which always seemed to me to belong in scientific math papers, not the practical life of someone doing computer science.

I don't mean to criticize the document or math in general, I would genuinely like to understand what 900 pages of this is going to bring me, especially when it starts with something I've never needed or seen outside of theoretical math discussions.

whitenoice 2 days ago 1 reply      
beisner 2 days ago 1 reply      
This book is a really accessible primer on the basics. Much more readable than a lot of other textbooks, which kind of go all-out in terms of rigor rather than present things intuitively. This was one of the textbooks we used in Princeton's introductory CS theory class (COS 340: Reasoning About Computation). The probability section is especially good.
bad_hairpiece 2 days ago 0 replies      
Well, shiiiiit. I came to Hacker News to avoid doing my discrete math homework. While this is number 1, guess I'll plug Fleck's textbook from UofI http://mfleck.cs.illinois.edu/building-blocks/
sjroot 2 days ago 1 reply      
I printed off the 2015 version (yes, the whole thing) and it has served as a very valuable reference. I wish there was a way to purchase a nice printed edition to support the authors. If they read this - thanks! Also, is a 2017 version of the course coming to OCW? I imagine that's why they'd release the updated PDF.
ralmidani 1 day ago 0 replies      
I'm still brushing up on Calc, but may have to learn discrete math on my own, since Harvard Extension School offers discrete and algorithms during the same semester, and I don't want to wait 2 years before I can take algorithms.

To that end, I bought a copy of the 1995 "C Edition" of Foundations of Computer Science by Aho and Ullman. It combines data structures and discrete math into one massive text.

Does anyone have an assessment of that book? Is the combined data structures and discrete math a good approach? Does the book hold up well against more "modern" ones?

ode 1 day ago 0 replies      
On this subject, has anyone taken this (https://www.coursera.org/learn/discrete-mathematics) MOOC on the same subject?


vayarajesh 2 days ago 6 replies      
Is this a good start for Math required for Machine Learning ?
hyporthogon 1 day ago 0 replies      
This is awesome, thanks.

Obligatory 'zoomout' recommendation: https://www.amazon.com/Mathematics-Content-Methods-Meaning-V..., which I learned about from HN (http://hackernewsbooks.com/book/mathematics-its-content-meth...). Wish I had read/pondered this before grad math classes.

kylemccann 12 hours ago 0 replies      
Does anyone know where I could find out more about Semantics, Operational Semantics, Denotation semantics and Type Systems?
kevindeasis 1 day ago 1 reply      
It depends on the person, but what is the median time in hours, for a person to finish this book?
treehau5 1 day ago 1 reply      
Donald Knuth's "Concrete mathematics" is also a really good one, and can be found used on Amazon for roughly 30 dollars.
aashu_dwivedi 1 day ago 0 replies      
I had used the second edition to refresh my probability before taking a class in AI and found the material succinct and very useful. It's only today that I found out that one of the authors is the CEO of the company I work for.

I work for Akamai Technologies and the author I am talking about is Frank Thomson Leighton.

40acres 2 days ago 0 replies      
Looks like an excellent resource. What would the prerequisite math skills be before diving into this work?
nvarsj 1 day ago 0 replies      
Any opinions on how this compares to the Rosen and Epp books? I'm looking for something to prep for a graduate level CS algo course.
voycey 1 day ago 2 replies      
I failed so hard at this in my CS degree :(
trueSlav 1 day ago 4 replies      
This is so stupid, the "primer on the basics" should be "Algorithmization for Computer Science", not a 900+ page math textbook (that doesn't even explore graph theory, calculus and differential calculus).
Snakisms pippinbarr.github.io
925 points by colinprince  1 day ago   133 comments top 51
jashkenas 1 day ago 2 replies      
This is delightful.

I only wish that someone would put it into a nondescript arcade cabinet in a hallway somewhere, so that people could stumble across it unprepared.

Edit: Naturally, it turns out that Mr. Barr has a PhD in this sort of thing, and that his thesis has to do with how we demonstrate our values during the course of playing video games. http://www.pippinbarr.com/academic/Pippin_Barr_PhD_Thesis.pd...

brilliantcode 1 day ago 1 reply      
Existentialism performed exactly like I imagined. Endless banality of reality symbolized by continuing borders.

I had to switch it off because I started to project myself to this small square pixel that elongates in the backdrop of pitch dark oblivion staring back into my empty soul.

kderbe 1 day ago 0 replies      
scardine 1 day ago 10 replies      
I guess I don't know enough philosophy to understand some of the jokes.

Anthropomorphism: the apple moves like the snake, man was made to the image of god and so on.

Apocalypticism: the game just ends after a few moves without notice.

Asceticism: the game ends if you eat the apple; you are supposed to be like a fakir.

Capitalism: you start the game with 50 and spend 10 on each apple you eat; when you are broke you can't afford the apple.

Casualism: I had to Google this one; the screen just flashes with random squares.

Conservatism: just the plain old snake game.

Determinism: the snake just moves by itself and you are unable to control the game - your destiny was set in stone the moment you were born.

Dualism: you can control the snake body with the regular controls, and you can move the snake mind with your mind. My mind is too weak so I was unable to move the snake mind.

Existentialism: you move the snake in a dark screen - after reading the wikipedia I guess the joke has to do with freedom in a meaningless world.

Holism: the whole screen moves with the snake (makes it very hard to get the apples in the corners)

Idealism: imagine you are playing a game of snakes

Monism: your play is not restrained by the walls - after reading it I guess the joke is about you being made of the same substance of god or something like that

Narcissism: when you finish the game it sends an email to the creator about how much you love his work.

Nihilism: just a black screen, no snake, no apples - nothing in the world really exists.

Optimism: you see apples everywhere but looks like they are not nourishing because the snake doesn't grow.

Pessimism: the play field is smaller and the apples appear outside of the walls where you are unable to reach.

Positivism: you see only a narrow part of the play field, I guess the joke is that you are unable to know the universe because our senses are limited.

Post-apocalypticism: no apples, you just move through a scrambled play field.

Romanticism: every time you eat an apple you see a cheeky statement like "food tastes like ashes when I'm not sharing it with you".

Stoicism: like a plain old snake game but you don't die when you hit the walls or yourself - after reading the wikipedia article I guess the joke is that virtue is sufficient for happiness, so the sage is immune to misfortune.

Utilitarianism: you have only two very narrow paths, one with 5 apples and the other with one apple. If you take the one with more apples you win; otherwise you lose.

ronilan 1 day ago 0 replies      
Game URL from author's site:


Life is meaningless!

pippinbarr 1 day ago 1 reply      
Holy crap. So many comments. I'm going to read all of these!
msluyter 12 hours ago 0 replies      
Some other possibilities:

Socialism: there are two snakes, and the 10 points for eating an apple gets split between them.

Communism: the game starts as Capitalism with two snakes, but then the game switches to Socialism, and the $50 (or remainder) gets expropriated and split between the two snakes.

Late Communism: Starts like Communism, but more and more points for apples get allocated to the AI snake (and fewer to you) because it's a member of the communist party.

Late Late Communism: Like Late Communism, but eventually no more apples appear. Game implodes and turns back into Capitalism, but the party member snake has all the money.

Platonism: another game runs in parallel alongside this one, but the snake is perfect and the highest possible score is obtained.

valine 1 day ago 1 reply      
I love how the optimistic snake never gets any bigger.
diegorbaquero 1 day ago 3 replies      
The game's awesome. Capitalism: "You can't afford the apple". Excellent. I didn't like the auto mail.
jug 1 day ago 2 replies      
I wish Pessimism would have let the snake move outside the constrained bounds. That would have told the player that pessimism may be painting a darker "reality" than what it really is.
justaguyonline 1 day ago 0 replies      
Beautiful, I loved the "value" aspect of it, but I found the simple but good game design the best part. A lot of people know to have a chime play when you pick up an "apple" in games for Pavlovian reasons, but fewer would add that nice hum that movement creates in general. It makes moving through space both a visual and auditory experience.
dsego 1 day ago 4 replies      
How to get out of nihilism?
thomk 1 day ago 0 replies      
Suddenly I regret all of my life's choices that led me to doing anything but becoming Pippin Barr.
mikeash 1 day ago 0 replies      
I started out by randomly clicking on Idealism and couldn't figure out what the joke was supposed to be. Trying another one made it click, though!
nsxwolf 1 day ago 0 replies      
Utilitarianism made me laugh.
thraway2016 13 hours ago 1 reply      
This is a blank white page in Chromium. No JS errors in the console. When switching from this tab to another tab and back again, the contents of the prior tab are partially clipped into a top-centered rectangle. No functionality or other behavior can be observed.
ctoth 1 day ago 2 replies      
I have no idea what this is, as the entire page just shows up as completely blank to my screen reader. Guessing some sort of game.
nojvek 13 hours ago 0 replies      
On a mobile phone. Using the swipe gesture was incredibly difficult. I missed a lot and quickly gave up.

Maybe just track the motion of fingers for movement? It would provide a more time-accurate interface. Otherwise it looked really cool. I love arcades.

pashariger 1 day ago 0 replies      
This is fantastic. Thank you!

Really enjoyed the Stoicism & Narcissism versions.

TazeTSchnitzel 1 day ago 0 replies      
This person's other work is great too. I'm currently deriving much amusement from Game Studies.
dahart 1 day ago 0 replies      
So good, abstract concept art in such a cute video game package. I can't play it for that long, but getting through the list feels more meaningful than if I'd spent two weeks mastering an amazing high score.

I think romanticism had me laughing the hardest.

simplehuman 1 day ago 2 replies      
Is there an online gorilla.bas :-) ?
rhardih 1 day ago 0 replies      
Nihilism made me chuckle.
overcast 1 day ago 0 replies      
Pretty rad existential, abstract art. Only missing the really abstract "kidism" in my opinion :) https://kidisms.com
AJRF 1 day ago 0 replies      
Here is what they all mean, saves you opening a bunch of tabs: https://airtable.com/shrfy8qWla6qIviin
soheil 1 day ago 2 replies      
Stoicism, you could never do anything to lose.
gpawl 1 day ago 2 replies      
Is Conservatism the same as the classic Snake game? That makes sense, but it would be interesting to see it contrasted against a Progressivism or Liberalism version.
Cofike 1 day ago 2 replies      
This reminds me a lot of these neat little Socratic games.


noisy_boy 18 hours ago 0 replies      
Holism: some goals are just unattainable.

Optimism: as you progress in your life, it just fills up with music no matter where you go.

Stoicism: I think the snake should have kept the same size whether it ate an apple or not.

hnal943 13 hours ago 0 replies      
I just found out this week that my wife has never played snake, and is unfamiliar with the concept.
gregorymichael 1 day ago 0 replies      
Narcissism is particularly well done.
RyanMcGreal 1 day ago 1 reply      
Positivism is particularly clever.
pippinbarr 1 day ago 0 replies      
Holy crap so many comments. I'm going to read them all.
kevindication 14 hours ago 0 replies      
Something I don't get to say every day: Nihilism made me smile.
jstrassburg 1 day ago 0 replies      
Excellent. Now I want to make one illustrating different logical fallacies.
lai 1 day ago 0 replies      
Wow! This is such a fun way to remember these concepts.
r0m4n0 1 day ago 0 replies      
Reminds me of the underlying themes behind Binding of Isaac. Also a brilliantly simple yet addictive game
saurik 1 day ago 1 reply      
The "swipes control snake" interface for this is extremely difficult to control... it seems like the swipes only register if you swipe while on the snake, which means to do fast movements you have to keep your finger hovering in front of the very thing you need to see constantly, and even then it isn't like a swipe is remotely a fast action you can perform (particularly as it seems to only register moderately long and deliberate swipes).
Fiahil 1 day ago 0 replies      
I wonder how many "narcissistic" emails he received..
Lapsa 18 hours ago 0 replies      
started with 'Nihilism' and thought 'what da hell?!!'
goldesel 1 day ago 3 replies      
Great game, but why does it consume soooo much CPU?
simondedalus 1 day ago 0 replies      
very good. dualism and pessimism particularly funny. like the overall design too.
kkajanaku 1 day ago 0 replies      
Monism is great.
revskill 1 day ago 0 replies      
How to win ?
behnamoh 1 day ago 0 replies      
Undoubtedly the best set of games I've played in so long!

Really great stuff and mind-tinkling :)


cust0m 1 day ago 1 reply      
no communism/anarchism?
mk89 1 day ago 0 replies      
optimism! LOL!
hex13 1 day ago 2 replies      
some options like Apocalypticism seem normal, without any special effects.
alexmlamb2 20 hours ago 0 replies      
necessity 1 day ago 1 reply      
Site doesn't load without javascript.
Why I left Mac for Windows: Apple has given up char.gd
683 points by shlema  2 days ago   749 comments top
jmcdiesel 2 days ago 27 replies      
I used to be a hardcore Windows guy...

Then 10 years ago I got a mac. I never went back..

But what am I saving money for right now? To build a nice PC again.

Mostly because of the exact reasons in the article.

I have a fondness for apple... but they have definitely lost their way. First, they were a computer company driven by a man who loved computers ("first" here is the Jobs return era) ... then they became a Computer company who also made a phone. Then they became a computer company who also made a phone and a tablet. Then they became a phone company who also made computers and tablets.

Now they are a phone company who presides over the death throes of an amazing operating system that is going to be killed off to make it more like a phone. The new "features" every cycle are more "let's put this phone feature on the desktop".

It makes me sad, as a Mac fan. The hardware is getting worse. The decisions are getting dumber every time. I won't buy a laptop without a MagSafe or similar connection; I have kids and animals, and the MagSafe has saved a laptop more than once. To remove something that was as core and identifiable a part of their computers was just a stupid move and served no purpose.

They don't listen to the industry or the consumers anymore, they stick their fingers in their ears and pretend to know best.

Jobs was hardheaded, but reasonable. Cook is trying to emulate the hardheadedness but fails to recognize the reasonableness needed to balance it.

A plane so good it's still in production after 60 years bbc.com
503 points by nairteashop  3 days ago   186 comments top 22
phillc73 3 days ago 9 replies      
I'm surprised the article doesn't mention the Cessna 182, which also started production in 1956 and is still rolling out of the factory new today.

While the article extols the virtue of the 172's engine, the fact is that the vast majority of them are running very old designs with carburetors and on Avgas. Avgas still contains lead. These old engines are hugely inefficient and, flown incorrectly, prone to cracked cylinders. Newer models are fuel injected and there are also a few diesel conversions.

All Cessna single engine aircraft now have to undergo supplementary inspections (SIDS), at least in Australia and I think it is the same in the US. I've seen first hand the horrendous amount of corrosion which can hide in a 50 year old aeroplane and not be found until the wings are removed. These SIDS inspections have the potential to ground much of the older 152/172/182 fleet and render what was a $25,000 asset practically worthless. It will be uneconomical to repair in many cases.

The above has happened to me personally with a Cessna 182. In the end it was sold for scrap with only the engine and avionics retaining any value. I've also seen the costs of these inspections on a Cessna 210 exceed $20,000. It needed a whole new main wing spar amongst other things.

The point I am making is that these very old single engine light aircraft need very meticulous inspections now to ensure they are still safe to fly. I do believe there are probably quite a few seriously at risk aeroplanes still flying today, especially if they have been left outside in coastal areas for any length of time.

I used to own a light aeroplane maintenance business.

sunflowerfly 3 days ago 8 replies      
I have flown one, and while good, it is not "that good". What happened is that the FAA rules for certifying a new aircraft are so stringent, and the resulting cost so high, that few new small plane designs make ROI sense. This has created a case where the strict rules in the name of safety have actually caused a reduction in safety. This plane was originally designed on slide rules. Today we could create designs more optimized in almost every metric, including safety, but no company can afford to do so. The FAA is supposed to change these rules soon.
FabHK 3 days ago 2 replies      
The article mentions the longest non-stop flight briefly - that was quite a story: the two guys flew for 2 months non-stop.

Here [1] are some pictures, incl. of the refuelling. Below some tidbits I found interesting or amusing:

- after take-off, they did a low pass to let a chase car paint white stripes on the tires, so that they could not cheat undetected (by landing somewhere and taking a break).

- they refuelled about twice a day

> I once asked John's widow if they handed down the waste during refueling runs. She said, "No. That's why it's so green around Blythe."

> Some time after the flight, Cook was asked by a reporter if he would ever try to replicate the stunt, to which he replied: "Next time I feel in the mood to fly endurance, I'm going to lock myself in a garbage can with the vacuum cleaner running, and have Bob serve me T-bone steaks chopped up in a thermos bottle. That is, until my psychiatrist opens for business in the morning."

[1] https://disciplesofflight.com/flight-endurance/

tadruj 3 days ago 2 replies      
They are still producing them because it's cheap due to "grandfathering" laws.

"grandfathering" means that if you designed an airplane like the 172 today it wouldn't meet the safety standards and you wouldn't be able to produce it, but since it was designed back in the day, as long as they don't change the design, they can still produce it.


I fly a 172 regularly. It's a safe plane, but you have to know quite a bit about the engine and how it works to be really safe up in the sky. I had an engine failure on take-off with an extremely well maintained plane. Starting the engine is a pain in the ass for most civilians who don't understand 4-stroke engines.

A Cessna 172 uses about 10 gallons of fuel per hour. That's quite a lot. I think in 2017 there are better options out there.

csours 3 days ago 3 replies      
In the last 60 years, automobile engines have improved many times: for instance, the 4.4 liter 8 cylinder engine powering the 1954 Pontiac Chieftain[1] produced as much horsepower and torque as the 1.4 liter turbo in my 2013 Chevy Sonic[2] - and it's not even a particularly good or modern engine. (Disclaimer: I work for GM, I'm using these models because I'm familiar with them)

Has the engine in the 172 been improved in that time period? The article says it has not, but I can't imagine using 60 year old tech like that.

I understand that it is "proven" tech, but that would be like saying that punch-cards are "proven" tech nowadays.

1. https://en.wikipedia.org/wiki/Pontiac_Chieftain#First_Genera...

2. https://en.wikipedia.org/wiki/GM_Family_0_engine#Generation_...

beloch 3 days ago 0 replies      
While it's "only" 52 years old, another plane still in production (albeit under a new company) that's worthy of discussion is the de Havilland Canada (now Viking) DHC-6 Twin Otter[1].

The Twin Otter isn't just nice to fly, cheap, or ubiquitous. It isn't just a mainstay bush plane everywhere. It's still the best plane in existence for certain extreme requirements.

Many different planes can go deep into Antarctica during the summer, but when somebody gets sick enough to warrant evacuation from Amundsen-Scott South Pole Station in Antarctica in the middle of winter, as happened in 2016[2], the DHC-6 is still the best plane for the job. In fact, two DHC-6's went because the only plane capable of performing search and rescue for the first was another DHC-6. There simply aren't other planes out there that can land on a short, frozen runway in the dark of an Antarctic Winter when temperatures are so cold that fuel turns into jelly[3].

Viking has been modernizing many aspects of the Twin Otter, but they're still making Twin Otters. The Twin Otter is 52 years old and still does things no other plane can.




batoure 3 days ago 1 reply      
I think this article is a little misleading. I did most of my initial flight training in a 172 I bought with a friend. We had it parked in Tucson near a company that was building custom planes and blazing the trail in glass cockpit design. After a couple of years I had built friendships with a number of people there and built the following picture of the industry:

Innovation in private aviation is so slow that it's effectively dead. This isn't because someone owns the market, but because FAA certification of new technologies is a 15-year, if not 20-to-25-year, process.

Why? You ask?

Starting in the 70s and into the early 80s there were a number of high profile crashes of private planes. Think Woz. A number of these crashes were due to pilot error, but a number of them became civil lawsuits where the operational complexity of the aircraft was blamed. The FAA was called upon to develop stricter standards, which put many private aviation companies out of business. Cessna survived, but given the high price of getting new tech certified, which lowers competition, there is far less incentive for them to change the design.

Many of the parts in the engine of my 172 were OE Ford parts found on cars in the 60s, but the FAA-certified stamp meant we would have to buy the $300 version.

TL;DR: the enduring success of the Cessna 150-180 is actually a tragedy of blocked innovation and not something to be proud of.

clueless123 3 days ago 0 replies      
Wrong! The title should read: Regulations so strict, 60 years of advances in technology can't make it to the marketplace.

To see what we are missing, just take a look at experimental aviation, which is not as heavily regulated..

*I did most of my basic training on a 172 and I love them like I love a favorite old pair of shoes.

Animats 3 days ago 2 replies      
There are a lot of 1950s and 1960s aircraft designs still flying. That was when smart people went into aircraft design, and it was the most productive period in aircraft design history, as everything went jet-powered. The B-52, the B-737, the B-747, the SR-71, and the Concorde are all from that period. (So are a lot of duds, of interest only to aviation historians.)

Ben Rich, former head of the Lockheed Skunk Works, once remarked that he'd worked on 30-some aircraft in his career, but today's engineer will be lucky to work on one.

cyberferret 3 days ago 1 reply      
The Cessna 172/182 types are fine aircraft, but I really would have loved to have seen some more innovation over the years.

When I was a student pilot, we did our first handful of familiarisation and evaluation flights in a 172. The instrument panel looked like someone had cut holes in an ironing board and stuck dials in it. The seats were no more than a 2 piece metal bench that someone had stuck thin cushions to, and the seatbelts would have looked at home in a 1940's car.

Then we transitioned to the SOCATA TB-10 Tobago for the rest of our training. It was like switching from a Russian built car to a Lamborghini. The instrument panel was ergonomic, recessed for shade, and the engine instruments were actually canted to face the pilot. We had Recaro racing seats in the aircraft which made long navex's more bearable. Inertia reel seatbelts. Gull wing doors that helped cool the aircraft quicker after sitting on a hot tarmac all day. Throttle controls that looked like a jet fighter instead of pull knobs.

The European design was simply leagues ahead, and made the flying experience so much better. I am thinking a major reason for the longevity of the Cessna training line is more to do with cost for budget conscious training schools, rather than being a better aircraft than any other trainer.

jcutrell 3 days ago 0 replies      
My dad owns and maintains a 1962 Cessna 182.

For those keeping count, that's a 55-year-old bird. Flies like a dream. In fact, we flew it last night. Recently put in a brand new engine. We'll be upgrading avionics soon enough, too.

I got my pilot's license this year, and hope to continue the tradition of flying my family in the plane.

Take care of stuff and it can last a long time.

tim333 3 days ago 2 replies      
>The 172 was based on an earlier Cessna design called the 150. This looked very similar apart from the fact it was a taildragger

I think they've got their Cessnas in a muddle. The 140 was a taildragger. I learnt to fly in a 150 which definitely had a wheel at the front.

geff82 3 days ago 0 replies      
The 172 simply has some advantages even after all those years. First, it is known everywhere (also its quirks, which makes it safer). Second, the high wing makes it perfect for the young pilot to fly. And third, it has space! I, at 1.87 meters tall, can sit comfortably in its back with some spare headroom remaining. I can't say that of many other aircraft in a similar segment.
TheSpiceIsLife 3 days ago 2 replies      
"One answer comes from the fact that the Cessna 172 is a high-wing monoplane, meaning the wings sit high above the cockpit. This is very useful for student pilots because it gives them a better view of the ground and makes the aircraft much easier to land."

I had to think for a moment what a monoplane is. It's not a biplane.

Anyway, the high-wing design also causes the plane to fly level with regard to its roll angle if you take your hands off the controls, due to the center of gravity being below the wings.

vanattab 3 days ago 0 replies      
Another great plane that has stood the test of time and is worth mentioning is the B-52. Over 60 years old and still being used extensively. One particular B-52 was piloted by a grandfather, father and finally son. What's more, the B-52s are scheduled to keep flying until at least 2045, making a total lifecycle of 90 years!!
woodandsteel 3 days ago 1 reply      
I think part of the reason we don't see new designs is they wouldn't be that much better. Often a technology advances rapidly, then hits a plateau where future improvements are just modest.

Think of jet planes, where everything since the 707 has just been a modification. That is why 50's and 60's planes like the B-52 and the A-10 are still flying. Or space rockets, where we are still just duplicating performance from the 60's (but with SpaceX we finally have something new).

Piston planes advanced very rapidly starting with the Wright brothers, but then hit the plateau in the 50's, and the next step up, jets, is just too expensive for most private pilots. Yes, it is possible to produce better small piston planes, but the sales are too small to justify the needed investment. Maybe electric planes will finally get us something new and better.

OliverJones 2 days ago 0 replies      
Airplaneheads often gripe that mass-media stories always glorify airframes and ignore powerplants. This story deserves that gripe. The story is actually about Cessna and Lycoming.

The Skyhawk airframe makes the machine easy to fly and land.

The Lycoming engine makes unplanned landings very rare.

Both are very important!

It really is an amazing airplane. In really cold weather in a 40 knot headwind I've gotten negative groundspeed in stable flight.

It takes real work to stall the airframe, and it recovers immediately if you let go of the controls.

jcutrell 3 days ago 0 replies      
I really wish this could get locked in:


ryanmarsh 3 days ago 2 replies      
How long till someone starts turning these into (relatively) cheap drone bombers? This is a proven aircraft; add bomb bay doors and a rack-and-release mechanism for 120mm mortar rounds. Instant 3rd world long range bomber, perfect for the warlord with a dirt strip and a mechanic. Airpower for the cost of ~10 technicals.
tyingq 2 days ago 4 replies      
Any idea what the longest-serving aircraft is? I believe there are still a small number of B-52s in service, and they were rolled out in 1952. That's 65 years.
partycoder 3 days ago 0 replies      
flightgear (free/opensource flight simulator) has it.

To get it started, press the engine primer 3 times, put mixture all the way in, throttle to 20% (+ throttle = 9, - throttle = 3), and turn the key twice (type "}}"), start it (s key), remove the parking brake (shift+B key). Then start increasing the throttle and when the airspeed indicator shows about 50 knots, go up by pressing the down key to take off.

Do not try in an actual plane.

Rust's language ergonomics initiative rust-lang.org
501 points by aturon  4 days ago   287 comments top 16
modeless 4 days ago 11 replies      
Are there plans to do user studies? 10 minutes watching new users code in Rust will give you better ideas than 10 weeks thinking about the problem in your head.

I feel like there's a real lack of user testing in software development tools land. If you're developing software for unsophisticated users it's obvious that you should be doing user testing, but it's an often ignored fact that developers are users too! APIs, compilers, build tools, they could all be vastly improved with some user testing.

yarper 4 days ago 4 replies      
After writing Rust in production for a while, the biggest bugbear I have is the naming/file structure.

I end up a lot with this;

 src/main.rs
 src/combobulator/mod.rs
 src/combobulator/tests.rs
 src/tests.rs
 src/somethingelse/tests.rs
 src/somethingelse/mod.rs
Because I find tests in the same file a bit confusing. It's really easy with maven-style layouts to know that "only things in main/java or main/scala get compiled and go into the jar". "src/test/*" and "src/main/resources" are for me. The same thing applies for Cargo.tomls and resources - there's not really a way to see what goes into the executable from the file structure.

But this isn't the biggest problem with having things called "mod.rs". That would be if I open 5 mod.rs's in a text editor with tabs, I have no idea what goes with what.

I know that tests should go under tests/, but that's specifically for integration tests. Integration tests are an order of magnitude less likely to get written imo, and if they are they'll probably get written as unit tests anyway.

If anyone has any top tips for how to structure larger Rust projects while separating unit tests into different files, please let me know!
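One workaround (a sketch; `combobulate` and the module layout are invented here): keep only a `#[cfg(test)]`-gated module declaration in the module file and move the test bodies out. In a real crate you can write just `#[cfg(test)] mod tests;` in `src/combobulator/mod.rs` and put the test functions in `src/combobulator/tests.rs`; that file is then only compiled for `cargo test`, not for the release binary. The module is shown inline below so the snippet stands alone:

```rust
// Hypothetical module, e.g. src/combobulator/mod.rs
pub fn combobulate(n: u32) -> u32 {
    n + 1
}

// In a real crate, replace this inline block with `#[cfg(test)] mod tests;`
// and move the body to src/combobulator/tests.rs; either way the tests
// are only built under `cargo test`.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn combobulate_increments() {
        assert_eq!(combobulate(41), 42);
    }
}

// main() exists only so this sketch compiles standalone.
fn main() {
    assert_eq!(combobulate(41), 42);
    println!("ok");
}
```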

ww520 4 days ago 4 replies      
Speaking of removing friction, there are three areas that have caused me grief when I wrote Rust code:

1. Error handling. The lack of built-in support for multi-error or error union in Result is painful in dealing with different types of error in a function. Support for Result<Value, Error1 | Error2 | Error3> would be helpful. Or maybe support for easily converting one type of error to another. Now there's lots of boilerplate code to deal with error conversion. Error chaining would be nice, too.

2. Lack of stack trace when an error occurs. Now that stacktrace starts when panic!() is called, which is kind of late.

3. Better support for conversion between &str and String. Dealing with strings is so prevalent in programming that making it easier to work with the two types would be a huge boost to productivity.

Edit: another item

4. Support for partially applied functions, i.e. binding a subset of arguments to a function. Currently there's no way to bind the self argument to the Option/Result chaining calls. Basically the Option/Result chain (.and_then, .map, etc) only carries forward the value of the Option/Result and nothing else. It would be nice to put a partially applied function in the chain, e.g. result.and_then(self.func1) where func1 has the self argument bound. Or in more general form, result.and_then(func1("param1", param2, _)) where func1's first and second parameters have been bound up front and the value of result will be passed in as the 3rd parameter.
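On point 1, the usual pattern today is an error enum plus `From` impls; the impls are exactly the boilerplate being complained about, but once written, the `?` operator applies the conversion for you. A minimal sketch (`AppError` and `parse_doubled` are invented names):

```rust
use std::num::ParseIntError;

// A hypothetical error union over two error types.
#[derive(Debug)]
enum AppError {
    Parse(ParseIntError),
    Io(std::io::Error),
}

impl From<ParseIntError> for AppError {
    fn from(e: ParseIntError) -> Self {
        AppError::Parse(e)
    }
}

impl From<std::io::Error> for AppError {
    fn from(e: std::io::Error) -> Self {
        AppError::Io(e)
    }
}

// `?` converts a ParseIntError into AppError via From automatically.
fn parse_doubled(s: &str) -> Result<i32, AppError> {
    let n: i32 = s.trim().parse()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_doubled("21").unwrap(), 42);
    assert!(parse_doubled("nope").is_err());
    println!("ok");
}
```

Crates like error-chain exist mainly to generate these impls. For point 4, closures cover most of the partial-application cases: `result.and_then(|v| self.func1(v))` binds `self` without any extra language support.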

killercup 4 days ago 2 replies      
I especially like this approach:

> Often, the heart of the matter is the question of what to make implicit. In the rest of this post, I'll present a basic framework for thinking about this question, and then apply that framework to three areas of Rust [...]

What's proposed here is a universally good way to think about what to make implicit. The proposed changes to Rust are just some applications of this.

twic 4 days ago 1 reply      
I'm really encouraged by this post. I ran into a situation somewhat related to the borrowing in match patterns this week [1], and whilst it's only a mild annoyance, it's lovely that it might get smoothed out. Today, I started using modules in anger, and was immediately mildly annoyed by the need to explicitly reference crates in my code, when they're already in my Cargo.toml, and to declare modules, when they're implied by my file structure, so I'm happy to see that that is on the radar too!

The file structure one makes me laugh, because one language that does implicitly create modules from file structure, in exactly the way Rust would need to, is Python, which is the one with the whole "explicit is better than implicit" deal!

[1] https://www.reddit.com/r/rust/comments/5whke7/deref_coercion...

CalChris 4 days ago 4 replies      
As I've said elsewhere, the Rust macro package is close to unusable. It makes easy stuff difficult and it doesn't exactly help with difficult stuff.

It would be interesting to compare the number of macros defined in the crates corpus divided by total line count and compare that with other languages. I do not think that I am alone in not using it. Yes, I use macros; I just don't program macros.

Obviously, Java has shown that you can survive without a macro pre-processor. That was even a point Gosling+Co made in a white paper I read way back in the day. But I do believe that if you are going to have a macro processor, it should be an expedient. Rust's macro processor is not expedient. It is its own impediment.

I'm used to using macros. I use them in C and I use them in assembly. These are both low level languages which Rust claims to be. Not being able to use Rust's macros in the style to which I've become accustomed is infuriating.

spraak 4 days ago 2 replies      
> Right now, such a signature would be accepted, but if you tried to use any of map's methods, you'd get an error that K needs to be Hash and Eq, and have to go back and add those bounds. That's an example of the compiler being pedantic in a way that can interrupt your flow, and doesn't really add anything; the fact that we're using K as a hashmap key essentially forces some additional assumptions about the type. But the compiler is making us spell out those assumptions explicitly in the signature.

I feel this exact same way with Go. E.g.

 x := map[string]map[string]int{
     "key": map[string]int{
         "another": 10,
     },
 }
Given that the outer type signature says that the `value` of the map should be a `map[string]int` it's sometimes quite annoying to specify that inner type over again
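For the Rust case the quote describes, the redundant bounds look like this (a sketch; `contains` is an invented function). Dropping `K: Hash + Eq` makes this fail to compile today, even though using K as a HashMap key already implies both:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// The `Hash + Eq` bounds must be spelled out because HashMap's
// methods are only defined when the key type satisfies them.
fn contains<K: Hash + Eq, V>(map: &HashMap<K, V>, key: &K) -> bool {
    map.contains_key(key)
}

fn main() {
    let mut m = HashMap::new();
    m.insert("a", 1);
    assert!(contains(&m, &"a"));
    assert!(!contains(&m, &"b"));
    println!("ok");
}
```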

swuecho 4 days ago 0 replies      

but do not forget to document what is implicit. Otherwise, it is magic and makes things more confusing. That was my impression from my last attempt to learn Rust.

the_mitsuhiko 4 days ago 2 replies      
My biggest and probably only real frustration with Rust is that modules and crates live in the same namespace. That makes stuff incredibly confusing to teach and read. I can otherwise live with the explicit extern/mod if needed.
zengid 3 days ago 1 reply      
A bit of inspiration can be gleaned from the work of Dr Stefik on Evidence-based language design [1][2].

[1] https://www.youtube.com/watch?v=uEFrE6cgVNY
[2] http://dl.acm.org/citation.cfm?id=2534973

Tarean 4 days ago 1 reply      
The implied bound one reminded me of a very similar thing in Haskell https://prime.haskell.org/wiki/NoDatatypeContexts . Basically

 data Hashable a => Set a = ...
is completely useless. It only forces you to add constraints to functions that, if necessary, would be required anyway.

Not to be confused with existential quantification

 data ExistentialSet a = forall a . Hashable a => ...
which carries a reference to the hash function in the instances, similar to trait objects in rust.

raverbashing 4 days ago 2 replies      
Great initiative

By its very nature, Rust is harder (than let's say Python or JS). It is compiled, there's not much runtime magic to rely on and low level is hard.

But thinking about this and trying to make it easier is important

hinkley 3 days ago 1 reply      
I'm an ergonomics nut and I've been looking to learn Rust. Any chance they're looking for a set of guinea pigs to report their experiences as new users, or are they mostly working on already-known issues?
raz32dust 4 days ago 1 reply      
I hope the ergonomics initiative takes it towards Java rather than Python. For all the hate it gets, Java's explicitness is a boon when maintaining large scale systems and preventing bugs.
trento 4 days ago 1 reply      
I was able to work with some rust developers at Hack Illinois recently. We started the 2017 Rust cookbook [0] with Brian Anderson.

[0]: https://github.com/brson/rust-cookbook

MrF3ynmann 4 days ago 6 replies      
I still can't get my head around Rust. While all those features definitely make sense, I find it very confusing sometimes.

Is there something like Rust for C++ programmers?

A $10K tiny house 3D-printed in 24 hours apis-cor.com
614 points by yurisagalov  2 days ago   246 comments top 43
milesf 2 days ago 4 replies      
I'm so surprised that people have not heard of the late R.G. LeTourneau. The man was a mechanical genius, and the father of modern earth moving machinery.

Here's a video of his Concrete House Machine from 1946: https://www.youtube.com/watch?v=BpWjyZO2lPU

This was 70 years ago! I read about them in his autobiography "Mover of Men and Mountains". He went with concrete because it was cooler in the summer, and warmer in the winter. Seems to me the "Not Invented Here" mentality has been around for a very long time.

saeranv 1 day ago 4 replies      
A lot of people are commenting (correctly) that any cost savings achieved here are trivial relative to the land cost. I think this is looking at this tech too narrowly though. There are many areas of architecture that can be revolutionized by cheap, accurate mass-customization.

One example would be energy. Right now heat transfer and the transmission of daylight through walls/windows can be theoretically optimized to reduce energy costs simply by customizing the geometry of the windows and walls relative to their orientation and local context. We don't do so in architecture because the costs of customizing the geometry is too expensive. Architecture has a history of attempting mass-customization since the 1960s[1], but it's always been limited by the technology and realities of integrating multiple subcontractors and consultants. Those problems still exist, but the industry is changing pretty fast and I feel automated form generation, simulation, optimization, and production technologies are converging to a point when this could be achieved.

[1] i.e John Habraken.

orless 2 days ago 3 replies      
I think this should be primarily interesting for buildings with special forms which are hard or expensive to build with traditional methods. The price is not the selling point here, at least not in Russia. Russia has a lot of cheap labor from southern ex-USSR countries like Tajikistan. So $10k for 38 sqm is not really cheap. You can build a 150 sqm two-floor house for around $25k. Also, $277 for the foundation seems a bit suspicious; it should be much more expensive.

"the radius of curvature of the TV matches the house wall curvature" pretty much looks like a PR stunt for Samsung. I wonder if they've actually chosen curved walls to somehow justify curved TV.

ChuckMcM 2 days ago 8 replies      
I find the notion of 3D printing a house intriguing but it seems ultimately impractical. There are better ways to manufacture homes that can be componentized and transported for final assembly on site. One where I visited the factory was the BluHomes[1]. You can automate much of the construction of walls and wiring and finishing if you do it in sections.

[1] https://www.bluhomes.com/

Flammy 2 days ago 1 reply      
Discussion on reddit from 2 days ago: https://www.reddit.com/r/Futurology/comments/5xf7sf/a_russia...

Apparently the $10k includes electrical, windows, etc.

donald123 2 days ago 3 replies      
This is not new. China already has a company that can 3D print a real two-story house. See https://3dprint.com/138664/huashang-tengda-3d-print-house/

The article also mentions another Chinese competitor that can 3D print a 6-story apartment building and a mansion.

djaychela 1 day ago 1 reply      
While this is clearly an impressive technical feat, I can't help thinking of the carbon footprint of creating a house out of concrete, which is notoriously un-environmentally friendly, and difficult to deal with when the structure is no longer required. Hopefully a more environmentally-sound material will be used. There's not a single mention on that page of the carbon footprint of the building method used, and I think it would be interesting to compare physically identical (as much as possible) buildings of different construction (3d printed like this, bricks and mortar, pre-fab wooden/osb panels, etc), particularly when the tag line of the article is:

"We Are Building the Future Today."

SnowingXIV 13 hours ago 0 replies      
I'm working on buying some land and building a house right now. The prices for the typical, traditional way of building a house aren't that astronomical. I looked into some prefabs, but for a split entry or two story it's pretty hard to bring my costs down much lower.

I wish I could get my land prepped and then have a house printed or dropped off cheaply, but I haven't found anything that can get me to the price point I want at ~2500 sqft other than building the typical way.

dbg31415 2 days ago 3 replies      
Cool tech, but...

Curved walls only look good from the outside. Design looks like something someone who has never built a house before drew up. Things like a sofa that can't face the TV screen, or ostensibly a bathroom that you have to walk through the bedroom to get to.

Also everything looks very narrow. I want to see them build a real house this way, something with full-sized doors (36"), full-sized washers and dryers, something with 2 levels, something with actual electrical outlets (I see none in the video or pictures).

I can slap together a shed without plumbing or electricity in a day. Boasting about the price, showing electrical appliances, but not including the electrical wiring... let's be kind and just call that some "optimistic marketing." I mean... It doesn't even look like they have a real foundation... And small point... if they are going for speed, why are they using a roller to apply paint? Spray would do a better job and cut the painting time more than 50%.

skookumchuck 2 days ago 7 replies      
Around here houses are still built on site, stick by stick. The only things built off site are the roof trusses.

I don't understand why all the walls, at least, aren't built in a warehouse and then trucked in. It could be built cheaper, more precise, with far less wastage. Holes for electrical, plumbing and HVAC could be already put in. Even windows can be pre-installed.

(I know that part of the problem is architects under-design houses, leaving it up to the contractors to figure out how to route electrical, plumbing and HVAC on-site.)

dahart 2 days ago 3 replies      
What is the outcome of housing this cheap? This is one more option in a growing list that includes inflatable concrete structures, and Tumbleweed houses, prefab cabins, etc etc.

Is this cheap enough to house people in third world countries? Probably not yet, but could it be on the way? And I'm sure construction costs aren't even the primary economic factor to solve...

Is it possible this trend could make vacation housing for affluent people disposable?

I love the idea of having a tiny house or even two I can move around, but I suspect, knowing me, that it wouldn't get used and would take more maintenance than I'm really truly interested in.

mschuster91 2 days ago 1 reply      
I have a slight feeling that this won't become very widespread on Earth - simply put, the rig is too small to construct anything larger than a typical single-family house.

However, taking such a thing and sending it up to Mars or the Moon, now there's a potential for "real" prefabricated housing.

theon144 1 day ago 1 reply      
Oh man, I'd sure like me some Samsung Nano Crystal Color Revolutionary Super Ultra High Definition TV and a Samsung refrigerator with No Frost system, a Samsung induction stove, Samsung dishwashing machine, Samsung electrical oven, and maybe even a Samsung microwave oven. Not to forget the innovative Samsung AddWash washing machine.

I wonder why that is.

loa_in_ 8 hours ago 0 replies      
Too bad that the article doesn't mention any electrical nor water/sewage installation. No wonder the appliances are depicted turned off.
bigbugbag 2 days ago 1 reply      
At first it seemed to me to be expensive for a little not really practical house to live in. Then I figured out this is a tech demo and PR stunt and not an attempt to make affordable housing for people to live in.

I wonder how this compares to a yurt or straw bale construction.

chjohasbrouck 2 days ago 2 replies      
I'd be surprised if this house meets state regulations for residential construction in any US state.

That'll be one of the biggest obstacles to 3D printed housing in the US. Even if you somehow get efficiency gains through 3D printing, it's going to require a different configuration depending on the regulatory environment, which varies by nation and state and county and city and topography, and changes every year.

These regulations affect every detail of the construction of your house, from the foundation to the window panes. Even details as innocuous as sink depth are regulated.

ryankupyn 2 days ago 0 replies      
This is really cool, but I'd love to know how durable a printed house is over the long term. Right now though, I think that there's a profitable niche for this sort of portable and low-labor-intensity construction in the defence and disaster-relief fields, where speed and cost are a higher priority than aesthetics.

I could easily see the US government (or rather, their contractors) using this technology when constructing bases overseas, especially in places in Afghanistan where workers don't just need to be paid, but fed, housed and transported at great expense.

rootsudo 2 days ago 1 reply      

Concrete is also amazing. I can't believe how wood stick framing took over as the traditional housing method.

Keverw 2 days ago 0 replies      
Oh wow, pretty neat and surprised it took just 24 hours. For some reason it reminds me a bit of the Monsanto House of the Future at Disneyland. I wasn't even born yet when they took it down, but randomly found videos of it on YouTube once. But it was a whole house made of plastic. Probably molded I'd guess since they didn't have 3D printers back then.


sandworm101 2 days ago 1 reply      
Carpenters, roofers, painters and plumbers are safe. This "printed house" seems to have required lots of hands. It is also so small that any reasonable team of humans could have built it in as short a time.

A better approach would be to have the robot print the concrete forms, allowing for humans to erect and fill them on the site. That might actually save on manpower.

kriro 1 day ago 0 replies      
"""The construction cost of the printed house amounted to $10134, which is approximately $275 per square meter, taking in account that partners have provided the highest quality materials"""

Due to the wording of the sentence I'm not sure if the material is included in the calculation or not. But assuming prices go down and the technology improves, a building like that for $10-20k in 24h is an interesting proposition for on-demand housing (even if it is just destroyed afterwards). Embedded sponsorship by Samsung aside, I think the wall printed to match the curvature of the TV is an interesting example. This could be interesting for event/marketing booths etc.

Edit: This could also be very interesting for Hollywood for set building :)

Spooky23 2 days ago 1 reply      
Looks cheap to build, impossible to modify or fix.
ensiferum 1 day ago 0 replies      
Anyone read the short science fiction (soon to be non-fiction) story Manna.

Is this the prototype of cheap "terrafoam" housing for the poor ppl displaced from the society? ;)

rodionos 1 day ago 0 replies      
Awesome! Finally, Flintstone on HWY 280 will have some competition http://www.flintstonehouse280.com/

I can see artsy architects thinking up a whole new range of designs that escape the constraints imposed by established manufacturing and construction practices. Probably not so much for permanent habitation, but for a garage, or a playhouse for kids.

jmspring 2 days ago 0 replies      
Tiny houses are great and all, but most articles, TV shows, etc. gloss over two basic issues:

- land
- hookups

Had this discussion today with someone enthused about a tiny home community until the land use rental fee came into play.

amelius 1 day ago 0 replies      
Related: This House Costs Just $20,000, But It's Nicer Than Yours [1]

[1] https://www.fastcoexist.com/3056129/this-house-costs-just-20...

wiz21c 1 day ago 1 reply      
Given that climate change is the big deal, how much energy did it take to actually build such a house versus a regular brick & mortar house built by human workers?
nikolay 1 day ago 0 replies      
You don't have to print a house; it would be better built from Lego-like blocks.
willyt 1 day ago 0 replies      
Other people have mentioned the high embodied energy of concrete but there is also a problem with insulating this structure. I hope the PIR wasn't just injected into the cavity because that is a recipe for disaster. The zigzag cross bracing you can see will be a massive cold bridge which will cause cold spots that will develop mould patches inside. Concrete is not breathable (it is mostly impermeable to water vapour) but hairline cracks develop which make it capillary active and will allow water to be wicked into the interstitial space containing the insulation and then through to the inner face to cause damp problems. A standard concrete wall needs to be a minimum of 300mm of solid concrete to remove the chance that a localised crack will form right through the wall. Usually when concrete is used it is purely for the floors and structural frame for this reason. You do see concrete clad structures but these are typically decorative panels with a capillary break behind, or they are damp and leaky old 60's buildings. Paint systems are not the solution as they tend to fail within 3 - 10 years and then they just trap even more moisture in the construction. Conventional houses get around this problem with a capillary break, basically just an air gap between the outer wall and the internal insulation which is on a block or in between timber structure. The alternative, which is well understood in central Europe, is to use breathable materials such as calcium silicate blocks and wood fibre insulation.

It looks like it forms a structure that only works in compression because of the way it is laid down. Even if it is glass reinforced concrete I don't think there could be a good structural interface between the printed layers. This is why it doesn't print the roof and they don't show how they dealt with the lintels over the doors and windows. These must have been installed manually. If the window openings had been 3d printed then they would have needed gothic arches to get around the 45 degree angle corbeling problem which is inherent in compression structures and the roof would have looked like a gothic vault for the same reason.

Another thing that is needed to make this into a dwelling that won't go mouldy, is some kind of vapour control layer on the inner concrete face. You then need to wire the place up, but there is the problem of how to hide the wires internally. You don't want electrical cables to go through the vapour control layer where possible and you definitely don't want any cable junctions where there is a risk of interstitial condensation making the electrics 'go fizzy'. Did they have to manually install dry lining over the cables? Are the internal faces rendered over conduit and then plastered?

I think their costs are suspicious as well. I don't know about Russia, but in Europe $277 would barely cover one day of time for one groundworks operative, let alone machine hire and materials for foundations. A recent project we worked on had a 20,000 installed cost just for the gravel fill for the groundworks for 2 small houses.

As an Architect, I think CNC machined cross laminated timber panels are more interesting because the embodied energy characteristics are much better and they are capable of forming diaphragm structures that experience tension such as floor and roof planes. In general flat vertical surfaces are not going away any time soon, because they are ABI compatible with Furniture v1.0.

alexro 2 days ago 0 replies      
Print the walls and let this guy do the rest: https://www.youtube.com/watch?v=J_Uoq6JYKbw
london888 1 day ago 0 replies      
Great, but I don't think it's the walls and roof that take the time usually. It's the planning consent, foundations, utilities and internals.
pascalxus 1 day ago 0 replies      
Finally some real innovation! This is awesome.

Now, let's get this over here in CA - I know I'm just dreaming.

jedberg 2 days ago 2 replies      
When people say, "All the manufacturing has gone to China!" show them this video, and say, "No, they've gone to robots".
spraak 2 days ago 1 reply      
I wonder about 3d printing with cob like materials.
pmiller2 2 days ago 0 replies      
I'd be interested to see how the process scales up to larger structures. That house is smaller than my apartment!
hossbeast 2 days ago 0 replies      
We need Elon Musk to ship a few of these things to Mars
jlebrech 1 day ago 0 replies      
we should print on the moon using robots with no human aid.

this is what elon musk must send, not just an orbital tour.

vadym909 2 days ago 0 replies      
Great to build a large and long wall /s
jlebrech 1 day ago 0 replies      
the housing crisis needs vertical buildings; I hope 3D printing can go in that direction soon.
m3kw9 1 day ago 0 replies      
A house without plumbing
djrobstep 2 days ago 5 replies      
What's the point of cheap housing if the landowning rentier class keeps extorting everybody through higher and higher rents?

The only way to get actually affordable housing is to tax land so hard that it stops being an investment at all and becomes merely a commodity, like it should be.

A deep dive into why Wi-Fi kind of sucks arstechnica.com
526 points by nikbackm  3 days ago   235 comments top 27
willidiots 3 days ago 22 replies      
I build public Wi-Fi networks, you've probably used one of them. AMA if you have Wi-Fi questions.

To say it "sucks" is a bit harsh. It's delivering multiple hundreds of Mbps to you via an unlicensed contention-based medium. The air interface is like sending packets over a noisy Ethernet hub; it's impressive it works as well as it does. That said, this article's a good primer on some of the protocol's fundamental challenges.

In the coming years we'll hear more about 802.11ax, which is thankfully focused on efficiency vs. raw numbers, but likely won't be ratified until 2019/2020.

dom0 3 days ago 1 reply      
There is a German tech saying that goes "Wer Funk kennt, nimmt Kabel" ("Those who know wireless, use wires").
olivierlacan 3 days ago 1 reply      
This article should be read by everyone working in open space offices across the world expecting to get decent speed and reliability out of Wi-Fi with more than a dozen people in a single room.

An open space office (or any office) for a company that depends on the Internet for any of its work without Gigabit Ethernet cables sticking out of every workstation is pretty damn foolish.

coleca 3 days ago 1 reply      
I once had the opportunity to listen to an extremely talented WiFi expert from Aruba Networks explain on a whiteboard to a rapt audience of infrastructure engineers how this works. And how adding multiple SSIDs is a contributing factor to this problem.

The most fascinating piece was what he called the "butter factor": the closer a substance is in consistency to butter, the more it will absorb WiFi signals. Aruba had one heck of a challenge installing a WiFi network in the Land O'Lakes manufacturing facilities. They had to use directional antennae mounted at eye level down each aisle of the factory.

bsenftner 3 days ago 5 replies      
My day-to-day happiness quotient shot up the day I ditched wifi and ran Ethernet cables through my spaces. I'd realized I was constantly noticing wifi issues, and an Apple MBP I use for email seemed to drop wifi every half hour. But no longer, with them all wired up! Realizing there are USB3-to-Ethernet gadgets really hits home how our modern technology is market-driven-dumb and anti-consumer: most modern laptops don't even have Ethernet ports anymore!
nerdbaggy 3 days ago 3 replies      
As somebody who deploys high-density wifi for a living, I can agree that WiFi sucks. 5 GHz is already super crowded and it's only getting worse. The new LTE over 5 GHz is going to kill 5 GHz, I think, once it gets deployed. Some of the cameras in arenas run on 80 MHz-wide 5 GHz channels that hop around and can't be channel planned. They are the worst.
r1ch 3 days ago 1 reply      
Lots of legacy tech is also one of the reasons why Wi-Fi sucks. If you do one thing today, disable 802.11b on your router. 802.11b beacons alone can completely jam a 2.4 GHz channel in dense deployments, exacerbated by those ISPs that broadcast their own SSIDs from your home router.

I wrote a more in depth blog about this at https://r1ch.net/blog/wifi-beacon-pollution

anf 3 days ago 0 replies      
Sounds like "everything is amazing and nobody is happy" syndrome [1] :-)

1. https://www.youtube.com/watch?v=dgEvjW1Pq4I

employee8000 3 days ago 1 reply      
If it's a mess, then as a consumer they're doing a great job hiding it from me. I have 20 devices connected to my wifi and everything is running perfectly smoothly. I did get frustrated with my 802.11n router 4 years ago and dropped $250 for the best ac wifi router on the market, and have had no complaints since then. Sure, it could be more efficient from a technical standpoint, but that's not something I'm interested in, and I'm happy with how seamless things are for me right now.
d33 3 days ago 1 reply      
It also sucks from the security point of view, even though the problems could in most cases be fixed with solutions known to the current state of cryptography:


searchfaster 3 days ago 0 replies      
Many people also don't understand how easy it is to forge a deauth packet and disconnect clients from APs. Hotels have been known to use this to kick you off your own personal hotspot on your phone. Thankfully 802.11w PMF solves this, and the FCC has started imposing fines on hotels doing it.
sniglom 2 days ago 0 replies      
What's going on in this article? Is it the author speaking about having bad hardware and configuration?

> In real life, if you had your devices close enough to each other and to the access point, about the best you could reasonably expect [with 802.11b] was 1 Mbps (about 125 KB/sec).

I used 802.11b a lot. In a non crowded situation reaching ~5.5mbit was not a problem at all. I remember seeing transfer speeds of about 700KB/s.

Why the author ignores the theoretical top speed, which is somewhere around ~60% of 11 Mbit, is beyond me.

Then the author continues with the same thing again;

> your best case scenario [with 802.11g] tended to be about a tenth of that (5 Mbps or so)

This again is not true. In a non crowded situation I had no issues reaching 2-3MB/s, which is closer to the theoretical limits of 802.11g after factoring in some signal loss.

Surely, today when everybody is having wifi you would probably not reach 700KB/s on 802.11b or 3MB/s on 802.11g, but back when it began it was actually feasible.

Tempest1981 3 days ago 1 reply      
If anyone is involved in the standards committees -- please try to make the naming more user-friendly. My non-techie friends are totally confused by A, AC, AX, B, G, N, WiMax.

No need to reveal the inner workings of the standards committees to the public. Simple numbering would help.

scurvy 3 days ago 1 reply      
The author kinda bungles his analogy/explanation of collision avoidance and detection. Wireless networks don't use CSMA/CD. They use CSMA/CA. There's a huge difference, and it's one big reason why wireless throughput won't ever come close to PHY speed.

Wired ethernet uses CSMA/CD and it's one of the reasons it won the LAN networking wars of the 80's and 90's.

rconti 3 days ago 0 replies      
adjusts Meraki APs to auto power, 40mhz channel width on 5ghz

I was dismayed at how terrible range was in my new, Silicon Valley-sized house (~1100sqft). Even with a Meraki AP at the front wall, it would work line of sight about 30 feet and start having issues as soon as I stepped behind a wall.

Having a couple APs has mostly solved my issues, but even so, it feels like overkill for such a tiny house. But the neighbors on either side are pretty close, and I see a lot of interference. Worked a case with Meraki support for a long time, and that seems to be the real problem.

I didn't realize I couldn't "shout over" my neighbors though, so I had signal strength set to max.

Back when I was troubleshooting, I tried everything. 5ghz only. 2.4ghz only. Tweaking channels manually. Tweaking everything manually. The funny thing was, nothing helped.. but when I set things back to auto (Except max power), it all got better. Every incremental change I made caused slightly worse performance, but not enough so to notice. Going back to auto fixed it all.

Hoping auto power helps as well.

matwood 3 days ago 1 reply      
I suggest reading up on how WIFI works and some of the problems like hidden nodes [1]. Sometimes I'm amazed WIFI works at all.

[1] https://en.wikipedia.org/wiki/Hidden_node_problem

ksec 2 days ago 0 replies      
Is there any reason why we can't use LTE as a standard for the Wi-Fi use case? So we'd have an in-house LTE router with a wired connection, and when you are out of range you'd still have LTE with your carrier. This is not LTE-U in Rel 12 or LAA in Rel 13, which both require a functioning LTE connection as an anchor point.

WiFi used to be good in the 3G / WCDMA days, but I think the appearance of LTE, with constant innovation and advances by both carriers and phone makers, has made the LTE experience so much better. And it will only get better with LTE Advanced Pro and 5G.

draw_down 3 days ago 0 replies      
I hate wires. But the only thing worse than them is wireless technologies.
apenwarr 3 days ago 0 replies      
Some even more in depth slides about why wifi doesn't always perform: http://apenwarr.ca/diary/wifi-data-apenwarr-201602.pdf
xbryanx 3 days ago 1 reply      
What software tools do people use (OS X, Linux, Windows) to test out and debug wifi connections?
hchenji 3 days ago 1 reply      
The article misses out on explaining WHY we cannot get the promised bitrates. The answer is Shannon's limit on channel capacity, which mandates that you pay in either bandwidth, higher power, or lower noise (higher SNR) to get higher capacity. These WiFi devices have internal rate-adaptation algorithms that choose a particular modulation and coding scheme (MCS) index based on the measured SNR. A higher MCS index means more bits per symbol (modulation), lower-overhead code rates, and more spatial streams (antennas), which is how you get the xx Gbps bandwidth advertised on the box. List of MCS indices: http://mcsindex.com/

In today's devices, interference is considered as noise, which means that SNR simply drops to a point where the higher MCS indices are not chosen at all. So, even though the device is capable of the advertised XX Gbps bitrate, the SNR isn't high enough to switch to those higher rates.

davidgerard 3 days ago 4 replies      
My loved one went for a junior sysadmin job. They'd decided to remove all the wiring and use wifi for everything because of just the sorta hype mentioned here. Loved one pulled out a Palm Tungsten C and proceeded to crack all their WEP passwords there in the interview. Got the job too ... and the task of putting quite a bit of wiring back.
amelius 3 days ago 1 reply      
The article doesn't seem to speak about directivity. Sending a narrow, directed beam of information could reduce contention issues, and reduce power requirements. But you'd need a more advanced antenna (probably an array), more advanced signal processing, and smarter software.
pedalpete 3 days ago 0 replies      
This statement points to why it doesn't suck for most people: "In practice, it wasn't a whole lot better than dial-up Internet, in speed or reliability."

At the time of adoption, many people were on dial-up or just moving to slightly faster internet speeds and they were accessing the internet via wifi, so they didn't notice a drop in performance. Wifi speed increased along with access to faster internet.

Is it as fast as it can possibly be? No, but it's like having a Ferrari in highway traffic. Most people can't take advantage of the technical capabilities of anything that would be considered better.

brokenmasonjars 3 days ago 0 replies      
I have my trading terminal directly wired in, as issues with wifi can become costly in the middle of a trade. That said, with my leisure laptop I'm totally fine with wifi. Then again, the leisure laptop really doesn't get heavy use such as gaming; some lectures on YouTube are probably the biggest test it gets. My phone (iPhone 6), despite being relatively new, has always been terrible with wifi, which I always found weird.
tdy721 3 days ago 2 replies      
I think the article is getting bits and bytes a little mixed up. You can expect about 1/10th speed?

Most file transfer dialogs I've seen ("real world"?) display transfer rate in bytes. Advertisers use bits, and that little marketing move alone explains a speed drop to 1/8th of the "advertised" rate.

mentat 3 days ago 0 replies      
He's testing performance with cheap laptops and blaming the adapter? Pushing data isn't free. If you have a bad CPU or memory architecture or even bad drivers, you're not going to get network rated speeds. This has been true for wired as well since the beginning of ethernet.
Electronic meters false readings up to six times higher than actual consumption sciencebulletin.org
420 points by rwbhn  3 days ago   152 comments top 22
dheera 3 days ago 12 replies      
I was paying upwards of $150 a month for heating in a tiny <400 sqft studio in Boston at one point, $200 in one month. There was definitely something wrong with the meter readings, and I'm pretty sure I was being charged for someone else's electricity, considering how little I was at home with the heater on. The utility company refused to investigate and would only threaten to send in debt collectors if I didn't pay up. My property manager only pointed me back to the utility company. I tried complaining to the DPU, but they didn't respond to e-mails, and I didn't have time for phone calls or mediating this mess in general.

As much as I wanted to fight, as sad as it sounds my time was worth more than the money I'd get back by spending time on the phone arguing and escalating the issue at critical hours of the day. :-/

Moved to California, paying a much more reasonable utility bill, and glad I didn't have to deal with that again.

If there were a "deal-with-humans-as-a-service" business that took a 20% cut of any money it got back for me in situations like this, I'd totally pay for it. I once had to argue for 2 hours on the phone with T-Mobile about $250 in excess charges on my phone bill, and while I got it all back after escalating to a manager, I'd totally pay $50 for someone to handle the 2 hours of phone calls for me.

Animats 2 days ago 5 replies      
Check the actual papers.[1][2] For single-phase meters, which covers most residential uses, this is a non-problem: "No deviation beyond the specification could be observed; no influence of interference due to interfering or distorted voltage, and no influence caused by interfering currents were observed." All the problems were seen with 3-phase meters, usually found only in industrial and commercial environments.

Figure 3 of [1] is puzzling. They claim to be testing a 3-phase meter, but the circuit shown is single-phase. Are they testing 3-phase meters with only one phase connected? That's way out of balance; 3-phase systems normally have at least roughly equal loads on each phase. While a 3-phase meter with a wildly asymmetrical load ought to measure accurately, that's not a normal condition.

[1] http://doc.utwente.nl/102016/1/Runaway_energy_Meters.pdf
[2] http://ieeexplore.ieee.org/document/7866234/

jwr 2 days ago 3 replies      
I'll recommend the hacker solution: first, buy and install your own meter, right after the power company one. It isn't expensive. Have it done by an electrician. I got mine installed in my fuse box, a small 3-phase DIN rail meter (I'm in Poland and pretty much all modern hookups are 3-phase).

This gives you a way to at least compare the official meter readings with an independent source. In my case, it showed no correlation whatsoever, and it turned out that the power company swapped the meter numbers between me and my neighbor.

Second, most meters have LEDs that flash a number of times per kWh consumed. It isn't difficult to build a device that measures the time between those pulses and gives you energy monitoring. I built one and had it running for a while. It's an eye-opening experience, you'd be surprised how much energy some devices consume, and also how significant a constant power draw can be.
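The arithmetic behind such a pulse monitor is one line. A minimal sketch in Python (assuming a meter rated at 1000 impulses per kWh; the actual rating is printed on the meter face and varies by model):

```python
def power_from_pulse_interval(interval_s, imp_per_kwh=1000):
    """Average power in watts over one LED pulse interval."""
    # One pulse = 1/imp_per_kwh kWh = 3,600,000/imp_per_kwh joules.
    joules_per_pulse = 3_600_000 / imp_per_kwh
    return joules_per_pulse / interval_s

# On a 1000 imp/kWh meter, a pulse every 3.6 s is a steady ~1 kW load.
print(power_from_pulse_interval(3.6))
```

Tape a photodiode over the LED, timestamp each pulse, and feed consecutive intervals through this to get a live power readout.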

zkms 3 days ago 3 replies      
OK so serious question -- utility companies tend to have high-accuracy, company-operated/maintained meters (that effectively measure the total use of a bunch of customers).

They're meant for like, identifying non-malicious/technical losses but it's possible to use them to identify customers who are bypassing/tampering with their meters, as:

total use (as measured by the trusted meter) must equal the sum, over customers, of accuracy(customer ID) * reported use(customer ID)

and assuming there aren't many cheating customers, and enough measurements (smartmeters make this easier, 15 minute slices is a lot better than 1 or 2 month slices), and assuming enough variation/independence between customers, it shouldn't be that hard to estimate the accuracy of each reporting meter.

I'm wondering if this sort of balance check (i don't know the proper terminology, this isn't my field of expertise, if you know more, i would love to be corrected!) would have been sufficient to detect the sort of misreporting mentioned in this article.

Full text of the actual published article is here btw: http://ieeexplore.ieee.org.sci-hub.bz/document/7866234/?relo...
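For what it's worth, that balance check reduces to a linear least-squares problem: one equation per time slice, one unknown correction factor per meter. A toy sketch with made-up numbers (this assumes the trusted feeder meter and the customer meters cover exactly the same time slices, and ignores technical losses):

```python
import numpy as np

# Rows = time slices, columns = customers; values are kWh as reported
# by each customer's meter. All numbers here are invented for illustration.
reported = np.array([
    [1.0, 2.0, 1.5],
    [0.5, 1.0, 2.0],
    [2.0, 0.5, 1.0],
    [1.5, 1.5, 0.5],
])

# Pretend meter 3 under-reports: its readings must be scaled by 1.6 to
# recover the true draw. The trusted feeder meter sees the true total.
true_factors = np.array([1.0, 1.0, 1.6])
trusted_total = reported @ true_factors

# Least-squares estimate of each meter's correction factor from the
# balance equation: trusted_total[t] = sum_i factor_i * reported[t, i].
est_factors, *_ = np.linalg.lstsq(reported, trusted_total, rcond=None)
print(est_factors)  # recovers approximately [1.0, 1.0, 1.6]
```

With noisy real data and many more slices the estimate degrades gracefully, and a factor far from 1.0 flags a meter worth inspecting.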

mdip 3 days ago 2 replies      
I noticed my electric bill went up a bit when I was moved over to the electronic meter ... not so much that I thought anything of it other than that gee, I use a lot of power and now it seems like I'm using a little more.

The problem here is that the discrepancy is in the favor of the power company. As long as they're making more money and especially because the "Accredited Testing Agency"'s own tests won't detect the fault, it's unlikely anything will be done about it unless there's an obscene amount of media attention paid to it and the regulators step in requiring a correction.

... Though there is one way that might happen more quickly. I wonder if there's a converse effect? Are there methods for consuming electricity on these meters that they similarly fail with but woefully under report the amount of electricity being used? If something like that was discovered and publicized enough for people to take advantage of it, I'd imagine the problem would get fixed on the short order. I'm pretty weak on electrical engineering, so I'm really thinking of this from the perspective of "Hey, that same flaw that allows an attacker to exploit my phone also allows me to gain root and unlock it!"

averagewall 3 days ago 2 replies      
A bit off topic, but I once had a water meter that was overcharging by perhaps around 100%. I worked out that it was because it didn't have a one-way valve. When there was air trapped in the pipes, pressure fluctuations would cause water to move back and forth through the meter, racking up the bill with no net flow.

People should test their own meters which isn't that hard if you're careful and know the basic concepts of thermostats and power ratings.
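To illustrate the mechanism with made-up numbers: assuming the meter registers flow in either direction as consumption (register designs vary, so this is an assumption), an oscillating column of water gets billed even though almost nothing is delivered:

```python
import math

# Flow samples in litres/s: a tiny net draw of 0.01 L/s with a +/-0.2 L/s
# oscillation caused by pressure fluctuations against trapped air.
samples = [0.01 + 0.2 * math.sin(2 * math.pi * t / 10) for t in range(100)]

net_litres = sum(samples)                      # water actually delivered
billed_litres = sum(abs(q) for q in samples)   # a direction-blind meter's count

print(net_litres, billed_litres)  # net ~1 L, billed over 10x that
```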

ComputerGuru 3 days ago 1 reply      
Most installations of electronic meters have been done in the past two to three years. It's easy enough to see if readings skyrocketed after the installation (mine didn't).

(I'm presuming electronic == smart)

roywiggins 3 days ago 3 replies      
I've heard of free energy scammers using this to trick people, you muck with the waveform so the meter reads wrong, and it looks like you have invented a box that saves energy.
ars 3 days ago 3 replies      
If a Rogowski coil and Hall effect sensors both give incorrect results - what did the researchers use to measure the meters?
peterclary 1 day ago 0 replies      
Whatever meter you get installed, make sure they take the initial reading properly! That's obvious for mechanical meters, but it turns out it matters for some electronic meters too.

In the UK, about 6 or 7 years ago, we bought an old house and upgraded the wiring, fuseboard, etc. The electricity company came and replaced the old meter with a newer electronic (but not smart) meter.

Unfortunately the guy they sent to install it didn't record the initial readings. That meant that as far as the company was concerned it had been installed at zero, when in fact it was way past that.

We weren't able to move in immediately, and in the meantime there was very little electricity used - one electric heater on low, occasional lights on/off when we visited. That kind of thing. So you can imagine our horror when the first bill was for thousands of pounds.

Fortunately when we rang up to complain we got somebody at the call center who immediately realised what must have happened and sorted it out. We had to agree an estimated usage, and it's still possible we overpaid for what we actually used, but at least it wasn't thousands.

deevolution 2 days ago 0 replies      
I live in a newly renovated apartment with electric meters in Brooklyn, I have LED lights, and I've experienced similar excessive charges... At one point my bill for my 1-bedroom apt was upwards of $300. Totally absurd.
kiliantics 1 day ago 0 replies      
Take home message from this thread seems to be never to trust your utility company to charge you fairly. But the alternative seems to be to figure out the problem yourself (or hire someone to do so), saving the company money on what should be its own expenses.

I got a $400 gas bill yesterday that needs investigating and I have little knowhow or time to do so...

upen 3 days ago 1 reply      
So this is why my electricity bill is so high
rodionos 2 days ago 3 replies      
It would be in consumers' best interest to be able to access meter readings with at least hourly granularity, just like wireless carriers provide call logs with up-to-the-minute accuracy.

This way one can detect anomalies in resource usage based on raw data, unless the error is constant.
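With hourly readings, even a crude rolling z-score would surface one-off anomalies, though, as noted, a constant error would sail right through it. A sketch (the window and threshold values are arbitrary):

```python
from statistics import mean, stdev

def flag_anomalies(hourly_kwh, window=24, threshold=3.0):
    """Flag hours that deviate from the trailing window by > threshold sigmas."""
    flags = []
    for i in range(window, len(hourly_kwh)):
        hist = hourly_kwh[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(hourly_kwh[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags

# 48 hours of ~1 kWh usage with one 6 kWh spike at hour 30.
usage = [1.0 + 0.05 * (i % 3) for i in range(48)]
usage[30] = 6.0
print(flag_anomalies(usage))  # -> [30]
```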

djhworld 2 days ago 0 replies      
Coincidentally this appeared on BBC news today http://www.bbc.co.uk/news/uk-39169313
phkahler 2 days ago 0 replies      
If waveform can cause the meter to read high usage... One could make something to correct it, and with that capability one could push power back to the grid with a distorted waveform at a higher rate ;-)
known 3 days ago 0 replies      
http://www.securemeters.com/index.php/global/ have been selling high-quality electronic meters for many years
dbg31415 2 days ago 1 reply      
So how would anyone know if their meter was broken?
awful 3 days ago 0 replies      
Somewhat related: can't comment on cost yet, but my house has code-standard lighted stairwell switches. When one was used with a fluorescent bulb, I noticed in the dark that it flickered continuously. It appears that the neons in the switch leak current through the bulb all the time, but not enough to start it up. Is that a lot of power? Yet to be determined, but neons are a dead short and fluorescent bulb startup is high current, so there's probably high leakage all the time and an odd waveform presented to the meter.
savrajsingh 2 days ago 0 replies      
ezoe 3 days ago 0 replies      
It should be called the Dumb Meter.
BrailleHunting 3 days ago 1 reply      
A SilverSpring Networks meter was installed on our house in N. Cal around 2011.

Perhaps there should be some certification requirements like scales for trade?

Apples Devices Lose Luster in American Classrooms nytimes.com
422 points by 2arrs2ells  5 days ago   319 comments top 4
ux-app 5 days ago 19 replies      
I'm a HS teacher. I was so happy when my school finally phased out iPads as the designated device for our year 7 students. It's such a rubbish device for content creation. The touchscreen is a POS for anything other than web browsing and casual games. Teaching coding, image manipulation, file manipulation was utterly painful or impossible.

Typing on them is beyond painful and of the hundreds of students I taught, fewer than a dozen actually bought a physical keyboard. This meant that every task took 3x longer than necessary.

On top of this they are expensive. To anyone even remotely IT savvy it was clear from the get-go that this was going to be a failed experiment. Unfortunately education, like most other things, follows the fashion of the time, and everyone had to learn the hard way that a traditional computer is superior in every conceivable way.

Chromebooks are rubbish too, so I don't really see the move to them as a positive either.

DaiPlusPlus 5 days ago 4 replies      
I remember the huge push when the iPad came out and all the talk about how it would replace textbooks and exercise books.

It could only be because school administrators were won over by the engaging user experience, completely overlooking practicalities. Back in 2010, and even into 2014, iOS lacked decent enterprise-management tools that would enable staff to lock devices down, and they're obvious distraction devices. Things were made worse by Apple's decision not to have multiple user profiles on the iPad, and by confusion over how Apple IDs work. I understand they've gone some way toward addressing those issues, but other concerns still apply: the vision of a wide range of high-quality (and interactive, no less!) iBooks replacing textbooks simply hasn't happened, due to the massive cost of authoring even a single iBook. But the main blocker, I feel, is that staff (both teachers and school district IT folks) just don't want to have to manage them. I know an IT contractor who resigned his job at a private school after having to deal with setting up hundreds of iPad Mini devices for every student - he simply hated the work involved.

As Apple is neglecting the desktop market, they're just as well neglecting the education market: remember when Apple had massive education market penetration in the late-1980s? Now there's not even an equivalent of the old eMac unless you count the now 3-years-old Mac Mini models.

decasteve 5 days ago 4 replies      
I replaced my kids' iPads because of the awful hunched posture it precipitates (as shown in the 2nd picture in the article). Also, after an initial enthusiasm for iMovie and Swift Playgrounds wore off, they gravitated towards the more distracting apps and games, youtube and netflix, etc.

Having them sit upright, with good posture, at a desktop computer puts them in a different frame of mind when it comes to what they do on the device. I also have more input on directing their attention to being interested in computing, playing different types of games, and programming (dabbling in python mostly). My hope is they get a broader picture of computing, which won't be inhibited by the handicaps of iOS.

secabeen 5 days ago 2 replies      
Finally. It was always so hard to see schools spend millions on expensive Apple tech when there was equivalent equipment available for half the price that met their needs equivalently well. We really don't need to be spending $400-500 per tablet to outfit a classroom with 1 device for every 3 kids, when each kid really needs their own.
WikiLeaks Releases Trove of Alleged C.I.A. Hacking Documents nytimes.com
401 points by t0dd  12 hours ago   220 comments top 16
dang 10 hours ago 0 replies      
spullara 12 hours ago 13 replies      
This headline is extremely dangerous. The phone itself was owned. No encryption was harmed by capturing the keystrokes and audio before it reaches the application. NYTimes should be ashamed of themselves for basically lying about the nature of the hacks.
anigbrowl 10 hours ago 0 replies      
And people wonder why I am only lukewarm about encryption and opsec. I use both myself, but I gave up evangelizing to other people years ago because (as I've said here on HN many times):

For regular people, the effort of encrypting things is simply not worth it because they're powerless against a really determined attacker. It's rational to protect against casual attacks from spammers and scammers, but protecting oneself against state-level attackers is futile unless you make a full-time job out of it.

Someone usually pipes up at this point saying 'we need to limit the powers of the state', like some sternly-worded law is going to undo the existence of the technology or take away the vast economic and political incentives to deploy it. Get real folks, technology doesn't get un-invented, and powerful organizations are just like powerful organisms; they're opportunist, they maximize their own chances of survival, and when they do collapse the resulting power vacuum is filled as rapidly as any other vacuum would be. One can certainly seek to govern the behavior of a state or state organ, but attempting to limit its technical ability is naive, for the same reason that you'd be naive to try to fix police brutality by legislating about the design parameters of police batons.

jMyles 11 hours ago 0 replies      
> WikiLeaks, which has sometimes been accused of recklessly leaking information that could do harm

Nice passive voice there, NYT.

amckinlay 11 hours ago 1 reply      
We really need Qualcomm and others to document their hardware interfaces for modems, baseboards, and SoCs so that open firmware and drivers can be developed for these devices.
idlewords 11 hours ago 1 reply      
This headline is false and misleading, and does not reflect the headline on the article (WikiLeaks Releases Trove of Alleged C.I.A. Hacking Documents)
uladzislau 11 hours ago 1 reply      
You should consider the assumption that your security IS compromised at any given point in time (bypassed or whatever) then you could foresee and prevent some worst case scenarios which usually come from hubris nonetheless ("hey, our app is 100% secure and tested by the top security experts - not like other apps on the market").
upofadown 10 hours ago 0 replies      
> According to the statement from WikiLeaks, government hackers can penetrate Android phones and collect audio and message traffic before encryption is applied.

This is a perfectly useless bit of information in that it says nothing about how this penetration could occur. Pretty much anything can be cracked with a trojan. Something like a currently valid remote exploit would be a much bigger deal.

I could say that all the secure apps are broken because I can stand behind you and look over your shoulder while listening to anything you might say.

misterbowfinger 12 hours ago 6 replies      

> According to the statement from WikiLeaks, government hackers can penetrate Android phones and collect audio and message traffic before encryption is applied.

How is that possible? Isn't the data encrypted before it's sent over the wire?

libertymcateer 12 hours ago 9 replies      
Edit: deleted, for very valid criticism. Next time I won't post in a rush during work hours.
bitmapbrother 11 hours ago 0 replies      
CIA Android Exploits


As you can see they pretty much all reference very old versions of Android (v4) and Chrome.

icodestuff 9 hours ago 0 replies      
Given the other revelations of the last few weeks, I have to wonder if these exploits are getting installed on every phone that the CBP demands people unlock. Seems like the obvious thing to do. Best not to trust your phone or any software on it at least without a factory reset, and preferably a software update, after it's been in CBP custody for any time.
uncoder0 10 hours ago 0 replies      
Besides the initial titlegore, these tools really aren't that surprising. I've always operated under the assumption that if the NSA, CIA, etc are in your threat model you've already lost.
throwaway31763 11 hours ago 1 reply      
I thought they were already compromised since both these services use SMS authentication; since the defaults AFAIK aren't particularly concerned about a change in the public key, it's broken for anything secure anyway.

Tox on the other hand seems much more secure... though I guess if your phone is compromised you're pretty much screwed to start with (which is not too hard with all the bloatware one needs these days).

james_niro 11 hours ago 0 replies      
Lol at NYT: it says that when they jack into an Android phone they are able to route the messages to a third party before they get encrypted
evjim 11 hours ago 1 reply      
This is why we should not rely on encrypted apps running on top of some other platform.

disclosure: working on an open source alternative for messaging

NASA proposes a magnetic shield to protect Mars' atmosphere phys.org
467 points by cyanbane  2 days ago   173 comments top 30
thatcherc 2 days ago 8 replies      
This is incredibly exciting but a few important details are missing.

The first is how big does the structure need to be? I can buy a 1 Tesla magnet online right now but that's probably not what they're thinking of. Would we need a city-sized coil or something like that?

The second is the time scale. They say that the temperature could rise by 4 Celsius and trigger a greenhouse effect, but is that an immediate effect (10 years or so) or century-scale effect? I'm hoping the scientists put out a paper because I'd love to learn more about the specifics of their proposal.

benp84 2 days ago 3 replies      
"can generate a magnetic dipole field at a level of perhaps 1 or 2 Tesla (or 10,000 to 20,000 Gauss)"

I'm just amused by this conversion. Who is this for? Are there people who know one unit of magnetic flux and not the other?

JumpCrisscross 2 days ago 4 replies      
Is there a good entry-level article or video explaining planetary magnetics? I'm an aerospace engineer and I struggle with the difference between magnetospheres, magnetotails, magnetosheaths, magnetopauses and Magneto's psychological struggles.
rhave 2 days ago 3 replies      
The article and paper lack any indication of the amount of energy needed to run such a magnetic field. What amount would it take to run it, and what sources of energy would be viable for it?
bleair 2 days ago 5 replies      
Does building a magnetosphere for the purposes of protection require an atmosphere, or is it all about capturing and shaping solar wind?

If it doesn't require an atmosphere, could this approach be used to build a magnetosphere around the moon? It feels like colonizing the moon first is a much easier and more useful problem to tackle. Once we have a moon colony you can crack water to make fuel and then go "wherever" you'd like - other asteroids, or Mars. The moon is much closer and easier to get to, though maybe people need the "excitement" that travel to Mars connotes.

yaks_hairbrush 1 day ago 1 reply      
Here's a thought: Could we put one of these at the L1 point of the Earth-Sun system to protect against solar storms? If something like the Carrington Event happened to our current society, it would be absolutely devastating.

Bonus: Could we make it big enough to encompass the moon?

hwillis 2 days ago 2 replies      
I don't know the specifics of solar radiation well enough to say how much power the deflection would require, but there is an easy order of magnitude calculation to figure out how much energy setting up a field that size would require. It's not the same as consumption but it's a start.

The energy in a uniform field is very simple to calculate: a sphere with a radius of 6371 km (radius of Earth, diameter of Mars) and 50,000 nT would store about 10^18 joules, about 6.5% of US annual electricity consumption. At the extreme of 500,000 nT that would be 10^20 joules, around 2x global electricity consumption.

A dipole field would require ~10x more energy, a zettajoule. That's around 2x global human annual energy consumption, including heat/transport/industry/etc.
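The uniform-field figure is easy to reproduce; a quick check of the order-of-magnitude arithmetic (uniform field only, not the dipole case):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
R = 6.371e6                # metres: Earth's radius (~ Mars' diameter)
B = 50_000e-9              # 50,000 nT, a rough Earth-like surface field

# Energy of a uniform field B filling a sphere of radius R:
# energy density B^2 / (2*mu0) times the sphere's volume.
volume = (4 / 3) * math.pi * R ** 3
energy = B ** 2 / (2 * MU0) * volume

print(f"{energy:.2e} J")  # on the order of 1e18 J
# Energy scales as B^2, so 500,000 nT (10x B) costs ~100x as much.
```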

superkuh 2 days ago 4 replies      
Unfortunately this will not work. That isn't to say that the construction, placement, and powering of an artificial magnetosphere at Mars-Sun L1 is infeasible. It can totally be done with today's tech and modest meter scale superconducting rings. Mini-Magnetospheric Plasma Propulsion (M2P2): High Speed Propulsion Sailing the Solar Wind (http://earthweb.ess.washington.edu/space/M2P2/STAIF2000.PDF) probably represents the core concept. Except instead of trying to go somewhere you stay where you are. The M2P2 paper says a 10cm diameter superconducting coil can divert solar wind in a bubble up to 20 km in diameter. It wouldn't take too much to scale that up.

Of course diverting that much solar wind would create a force on the object. In the paper above that force is used to accelerate a craft. It'd be some handful of newtons at 20 km and linearly more to do what this project wants. One way you might get around that is to "lean into" the wind by going down the sun-side of the L1 halo orbit and allowing the force from the diverted solar wind to counter the sun's gravity's acceleration.

But for the vast majority of Mars' lost atmosphere, the kinetic energy needed to achieve escape velocity does not come from impact with solar wind ions or other solar-wind/magnetic-field mechanisms (all of those hereafter referenced under the umbrella term "Jeans escape").

Instead, the majority of the kinetic energy comes from the electrons ejected when the sun's light ionizes the upper atmosphere. Each ejected electron carries quite a bit of energy, which is distributed to the ions it later interacts with, and from those ions to others. If there were no solar wind at all, the rate of atmospheric loss at Mars would still not drop significantly, provided the sun still shone upon it.

That isn't to say that, having no magnetosphere, Mars (or Venus) does not lose an additional small amount of its atmosphere to Jeans escape mechanisms. But that amount is limited due to the currents created in the upper atmosphere by photoionization, which create their own local magnetic field. That creates a bow shock around the ionopause which slows the incoming solar wind such that its constituent ions no longer have the energy needed to deliver the boost required for escape. And because of the induced magnetic field, other solar-wind magnetic-field mechanisms that would pick up the ions ionized by the sun's light are mitigated.

I love the idea of this, but it isn't going to give Mars a decent atmospheric pressure in just a few years.

On a (much) lighter note, I was thinking if you put these artificial magnetospheres all over the inner system and then coordinated turning them on and off you could "paint" the termination shock surface of the heliosphere with different scales of tubulence in charge density. It'd be a multi-hundred AU wide screen visible only from very far away with sensitive polarimeters (detecting the changes in lines of sight charge density through faraday rotation). Might be a decent way to METI since it'd not require much energy or high angular resolution at the other end.

tldr: It's technically and economically feasible. But it doesn't work as suggested for atmospheric protection, because almost all the mass loss is from light-caused photoionization (and other Jeans escape mechanisms), not the solar wind.

n1000 2 days ago 2 replies      
> The current scientific consensus is that, like Earth, Mars once had a magnetic field that protected its atmosphere. Roughly 4.2 billion years ago, this planet's magnetic field suddenly disappeared, which caused Mars' atmosphere to slowly be lost to space.

What kind of event could cause the loss of a planet's magnetic field?

inlineint 2 days ago 2 replies      
It is unclear what kind of technology they are planning to use for this shield. If it's ordinary electromagnets, it's unclear where they would get the energy; if superconducting ones, it's unclear that it would be possible to keep them in a superconducting state for long.

I wish someone more knowledgeable about this kind of tech would comment on this.

gexla 1 day ago 0 replies      
I think the science fiction movie "Spaceballs" was ahead of its time. Planet Druidia had an enclosure to protect its oxygen, but Lord Helmet tried to suck it out with MegaMaid.

NASA might have to think about how to counter such threats, possibly from a future and more hostile Earth civilization.


pcmaffey 2 days ago 4 replies      
Curious why Mars and not the Earth's moon, seems to be the prime target for colonization?
CodeSheikh 1 day ago 1 reply      
"As a result, Mars atmosphere would naturally thicken over time, which lead to many new possibilities for human exploration and colonization.." How long would it take though?
marcofloriano 2 days ago 0 replies      
Best idea I've seen in years on space solutions, especially for colonization. No marketing bullshit, just plain science. And it may be the first practical step toward terraforming a planet before we try to colonize it.

"While it might seem like something out of science fiction, it doesn't hurt to crunch the numbers!"

Now we need those numbers.

Meerax 2 days ago 0 replies      
Would a distributed constellation of satellites at L1 work? A bunch of smaller magnetic dipole umbrellas working together.. Easier to repair and replace individual satellites without having the whole system go down.
psandersen 2 days ago 2 replies      
Would something like this be viable as part of a geoengineering solution to climate change?

E.g. could we partially cancel out earth's magnetic field in order to control atmosphere loss and thereby control the greenhouse gas effect?

IndianAstronaut 2 days ago 1 reply      
I am curious why we don't focus more efforts on Titan, which already has a thick nitrogen atmosphere.
JoBrad 2 days ago 1 reply      
I believe this is the workshop referenced, and they do have a LiveStream that you can watch.


visarga 2 days ago 0 replies      
> NASA proposes a magnetic shield to protect Mars' atmosphere

That's officially called a "deflector", anyone who's seen Star Trek knows.

peg_leg 2 days ago 0 replies      
Cool, let's build the Death Star there.
novaleaf 2 days ago 1 reply      

A variation put into the Earth-Sun L1 and used to direct focused steams of solar particles onto an earthward target....

Houshalter 2 days ago 0 replies      
Instead of terraforming, would it be possible to build a giant glass ceiling over most of the surface? Humans don't really need an entire atmosphere, just a few hundred feet of it.

This is still infeasible now, but it seems much easier than terraforming. With robotic labor and automation, it may not even be that expensive.

faragon 1 day ago 0 replies      
And what about an artificial moon instead?
gexla 1 day ago 0 replies      
This could lead to life elsewhere in the galaxy to protest climate change on Mars. We might get visitors if we try messing up a second planet.
jlebrech 1 day ago 1 reply      
how about just moving there and polluting the atmosphere as a side-product of just being there?
lutusp 1 day ago 0 replies      
It's a nice idea, and it's one I proposed some years ago:


ganfortran 2 days ago 0 replies      
Good luck they have that money.
jasonme 2 days ago 1 reply      
You hated Trump that much huh
spraak 2 days ago 1 reply      
Why not a shield for earth, when this atmosphere is clearly in need of help, and already much closer to liveable than Mars'?
xchaotic 2 days ago 2 replies      
So let me get this straight - science can't currently beat flipping a coin in terms of predicting the weather more than 2 days in advance, but NASA can accurately simulate space weather for many years and how it will affect other planets?
Razer targets perfect Linux support facebook.com
415 points by danjoc  4 days ago   233 comments top 30
mathnode 4 days ago 16 replies      
I am one of many disgruntled Razer Blade Stealth and Razer Core owners, currently awaiting a refund. Some customers are on as many as their 4th or 5th replacement unit. They have no English support in the UK, just a couple of German phone numbers which nobody seems to answer. The forum is terrible and they can't translate emails correctly. It's insulting.

The typical issues are usually related to firmware, which myself and other users would be willing to wait for to be fixed, but Razer's default, almost auto-response, solution is to just send you another unit, with the same issues.

In general though, the razer blade stealth is not even in the same league as an x1, mbp, or xps, and it's not supposed to be. It's just priced the same.

Razer: Apple prices, Gateway support.

kobeya 4 days ago 2 replies      
Any Razer engineers here? I'm a proud owner that would like to work with you directly to fix some of the remaining driver support issues.

Also, has anyone gotten the Razer Blade to work with the Razer Core thunderbolt 3 expansion chassis? On default Ubuntu 16.04 it picks up the thunderbolt hub, but nothing underneath. This would be an amazing laptop for machine learning if I could supercharge it with a Titan-X.

Also, consider a 32GB build for us developers...

rohall 4 days ago 3 replies      
This is great! I picked up a Razer 2016/1060 a couple months ago and have been running Ubuntu on it since then.

There's been a few issues, but overall it's been a great machine to transition to after a decade of Mac usage. If you're interested, it will require a bit of configuration (and even then its not 100% perfect just yet). See here for a list of issues/solutions: https://docs.google.com/document/d/1jI2jlVi1V0H8SeNm5kspJ1qX...)

Feel free to ask any questions if you're curious about picking one up!

iotscale 4 days ago 3 replies      
Well, I think I'll refrain from buying razer because of this: https://mobile.twitter.com/internetofshit/status/83665111681...
pbz 4 days ago 6 replies      
If they're reading this, here's my wishlist:

1) 15.6 inch laptop (14 is too small, especially with those bezels).

2) As thin of a bezel as possible (see XPS)

3) Camera on top! I don't want to show my fingers in conference calls. This may force the bezels to be larger on top, that's fine. (Pop-out camera?)

4) Offer the highest quality display from a color reproduction point of view, not the refresh rate. This is NOT a gaming laptop, or at least not a pro one.

5) The resolution should allow me to use the OS without scaling. If you can squeeze more than 1080p (1440?) the better. We need vertical space for coding! Some folks like 4K, so that should be an option.

6) Connectivity is important; lots of ports. Also, Intel WiFi cards, not Killer.

7) The sound should at least be decent. I don't understand how a phone can have better (ok, louder) sound than a laptop.

8) Give us an option to optimize for power vs battery life. I want the fastest CPU (within reason) and a good GPU (1060 or 1050, but ideally 1060) at the expense of battery life, but others would like more battery.

9) I'm picky about the SSD that goes into the laptop. Either allow me to replace it or allow me to pick what I want (960 Pro).

10) Good trackpad, ideally with buttons.

11) Good keyboard, super important...

12) Under 6lbs

13) Cooling has to be top notch and the air intake cannot be on the bottom. I actually use the laptop on my lap.

14) Height should be around 1 inch. Making it super thin like MBPs is not worth the tradeoffs.

15) That logo needs to go or at a minimum don't make it glow...

acabal 4 days ago 1 reply      
I've been using a Razer Blade 2015 as an Ubuntu-only computer for several years now, and I've recommended it on HN in the past. It's been a great machine, and Linux works surprisingly well. (Of course that means that there are some minor problems, but "surprisingly well" is high praise in the world of Linux on laptops.)

Lots of people here complaining about build quality issues, but mine has been completely solid. The only issues I've experienced are some of the keyboard keys are losing their matte black printing, which is surprising but not fatal.

Additionally, at first I had a hard time getting a Blade that did not have a screen with a pink-to-white tint to it. However this appears to be an issue with all IPS panels--my Nexus 6 has the same issue and you can find people on Apple forums complaining about similar problems on Macbooks. Razer support was very helpful, and they hand-checked a unit to send me as a replacement, which is the one I'm using to this day.

In short, I'd continue recommending Blades for development work as Linux machines, and even moreso if they can iron out the usual Linux driver issues that plague all laptops.

Edit: I should also add that my limited experience with Razer support has been good, in that I got personal replies from people who didn't seem to be reading from a script, and who were happy to accommodate my returns and pickiness about screen quality.

hinfaits 4 days ago 1 reply      
For readers confused as I was: the Facebook post only says they're "looking at" better Linux support. The author buries, in a comment, a target for flawless Linux support[1].

But I'd expect that if flawless Linux support were actually their goal, they'd announce it more prominently, which they did not.


f8kr 4 days ago 1 reply      
Razer's Synapse, i.e. cloud-based drivers, seems like a huge step backwards. Needing to make an account just to change the sensitivity of my mouse is crazy.

I've heard the argument

* they needed more space on the device so couldn't include the drivers and configuration

* cloud drivers allow portability of configuration

Both of those fall on their face in reality. LAN gaming is mostly dead since most multiplayer games are online, and memory is cheap.

justicezyx 4 days ago 1 reply      
I think the developer community right now is significant enough to offer a meaningful market for a smaller shop like Razer. If they manage to deliver a relatively well-built machine (not to the level of Apple, something close to the XPS or a ThinkPad), with solid Linux integration, slightly expensive (like 10% over the mass market models with equivalent hardware specs), I will buy for sure.
geoka9 4 days ago 2 replies      
Please also add trackpoint[0] so that we thinkpad (linux) users can be more comfortable considering switching to your platform.


enknamel 4 days ago 4 replies      
I really like seeing someone step up and make a high-quality Linux laptop. After the thread yesterday about System76 (who I thought was doing that), this makes me really happy. It's also somewhat of a threat to Apple: if you aren't doing any iOS development, you will now have a really nice alternative to a MacBook Pro.
ryanisnan 3 days ago 0 replies      
To chime in, I recently (this week) purchased a new Razer Blade laptop (1920x1080) with the intention of dual-booting windows and linux, as a permanent replacement for my MBP.

I have to say, the build quality seems top shelf. The laptop itself is a sturdy feeling machine. It booted up out of the box just fine, and within a bit I was supporting 2 player rocket league on an external 34" monitor with no problem.

After some fussing about, I finally got Antergos up and running on a smaller partition, and now it's working flawlessly.

I am not a linux guru, so I had to bash my head a little bit, but here I am...

I cannot speak anything about the quality of the customer service, or how long this laptop will perform admirably (obviously), but so far I am extremely happy.

edit: I should say that I received the laptop today (at time of writing) and getting linux up and running only took a few hours of my uneducated self faffing about.

cdubzzz 4 days ago 0 replies      
For many of Razer's peripherals, there are some very well supported reverse engineered drivers: https://github.com/terrycain/razer-drivers
m12k 3 days ago 1 reply      
Slightly off topic: I was about to ask, in this day and age of responsive web design, why do Facebook, Wikipedia and others still have m.-prefix websites? On further reflection I can only assume it's to provide a bandwidth-constrained alternative, rather than just a visual change. Still, these look like crap on a desktop, and it's a frustrating leak of implementation details when someone posts a link to Wikipedia from their phone and suddenly I'm looking at a weird design on desktop. They're so good at detecting when people are on a small screen and redirecting to m.; couldn't they please do the reverse too and redirect from m. to the normal version for desktop clients?

And if someone needs a bandwidth constrained version on desktop (developing countries and people trying to avoid data roaming fees come to mind), then maybe we could come up with a better way for clients to tell this when requesting pages, rather than try to infer it from screen size?
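The "better way for clients to tell this" the commenter asks for partly exists already: browsers can send a `Save-Data: on` request header when the user opts into data saving. A minimal sketch of the reverse redirect, where the user-agent tokens and the `m.` convention are illustrative assumptions, not any site's actual logic:

```python
def canonical_redirect(host: str, user_agent: str, save_data: str = "off"):
    """Decide whether to bounce an m.-prefixed host back to the desktop site.

    The UA sniffing and the Save-Data opt-out below are heuristics for
    illustration only.
    """
    is_mobile_ua = any(tok in user_agent for tok in ("Mobile", "Android", "iPhone"))
    if host.startswith("m.") and not is_mobile_ua and save_data != "on":
        return host[2:]  # e.g. m.wikipedia.org -> wikipedia.org
    return None  # keep the requested version
```

A desktop browser hitting `m.wikipedia.org` would be redirected, while a phone, or any client that sent `Save-Data: on`, would stay on the lightweight site.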

tyleo 4 days ago 1 reply      
I have a 14" Razer laptop from 2016. I am currently running Windows on it, and FWIW I have never been more pleased with a laptop.

I wonder if they will add compatibility for the Razer Core on the Linux side of things.

scurvy 4 days ago 0 replies      
I'll just put it this way: BIOS updates require purchasing a Razer Core or RMA'ing your entire laptop. Not a lie. Not an exaggeration. It's the truth for Blade and Blade Stealth owners.

The experience is borderline terribad. I got things (mostly) working with Linux Mint, but:

1) Runtime is maybe 2 hours.

2) It won't sleep when you close the lid (I have sleep hotkeyed instead)

3) The USB support is loltastic

4) External HDMI connector worked for a week then quit

5) Takes 3 hours to fully charge the battery

I could go on but it's Friday. I had really high hopes for this laptop and was ultimately let down by Razer. They should stick to making their peripherals work with Linux, then they can put on their bigboy pants and try to make an entire system.

Animats 3 days ago 0 replies      
They control the hardware. Linux is open source. This is their problem. Why do they need "feedback, suggestions and ideas on how we can make it the best notebook in the world that supports Linux"?
kyledrake 4 days ago 1 reply      
I discussed this with the CEO a while ago on Twitter https://twitter.com/minliangtan/status/447658322544439296

I think their best bet would be to have someone at the company work specifically on Linux issues. Dell did this and made a handsome return on their investment.

I currently use a Dell Developer XPS, but would definitely consider the Blade as a candidate for my next laptop if they were good about Linux compatibility in their next release.

ndesaulniers 4 days ago 0 replies      
Here's my damage report for running Linux of a late 2016 Razer Blade Stealth laptop: https://www.reddit.com/r/razer/comments/5orvy5/late_2016_rbs...
abvdasker 4 days ago 0 replies      
Calling it now: Razer is going to eat Apple's share of the *nix-based developer market over the next few years.
notheguyouthink 4 days ago 0 replies      
This is a quick way for me to buy them instead of a new MacBook Pro. Awesome to see someone step up to the plate!
ezoe 3 days ago 0 replies      
I bought a used Razer Blade Stealth laptop from my colleague.

He said it wasn't that bad, but it overheats easily and the CPU is almost always down-clocked because of it. So it didn't achieve the performance he expected from the spec.

Razer Blade Stealth is almost perfect on spec: built-in 4K display with NO NVIDIA GPU. That's great for Linux. Unfortunately, it's too thin. Not only is that bad for overheating, it also doesn't have many useful ports, especially Ethernet.

I can't understand why a computer advertised as a gaming laptop doesn't have Ethernet.

I haven't used it long enough to evaluate it, but so far it's good as a portable toy computer.

ythn 4 days ago 1 reply      
Anyone else own a Razer mechanical keyboard that doesn't work in GRUB/BIOS menus?

I have a Black Widow Stealth and it works great... except not in BIOS menus or GRUB menus. In those cases I have to plug in my old keyboard to get a response.

clord 4 days ago 2 replies      
At first I thought the coloured keyboard on most of their systems was useless, but I wonder if I could rig it up to switch colour based on vim mode. That might be fun. Is it controllable from software?
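It is controllable, at least through the reverse-engineered razer-drivers project linked elsewhere in this thread. A sketch of the vim-mode idea; note the `openrazer.client` module path and the `fx.static` call are assumptions based on that project's Python bindings and may differ by version:

```python
# Map vim modes to RGB triples -- this part is pure logic.
MODE_COLORS = {
    "n": (0, 255, 0),   # normal: green
    "i": (255, 0, 0),   # insert: red
    "v": (0, 0, 255),   # visual: blue
}

def color_for_mode(mode: str) -> tuple:
    """Return the RGB color for a vim mode, defaulting to white."""
    return MODE_COLORS.get(mode, (255, 255, 255))

def set_keyboard_color(mode: str) -> None:
    """Push the color to every detected Razer device (requires the daemon)."""
    from openrazer.client import DeviceManager  # assumed module path
    for device in DeviceManager().devices:
        device.fx.static(*color_for_mode(mode))
```

Vim could invoke this via `InsertEnter`/`InsertLeave` autocommands shelling out to a small script.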
dijit 4 days ago 1 reply      
I recently purchased a Dell Precision 5520 because its Linux support is terrific (or at least it was on the 5510). If they pull this off, then the next laptop I buy will almost certainly be a Razer. No question.
kseifried 3 days ago 0 replies      
Not a gamer, but I wanted a better mouse, so I bought some Razer mice over the years. They required internet to set up, had a huge bloated driver/software package, and hardware-wise the build quality wasn't there. Gave up on them and went to the Apple mouse, much happier (didn't realize at the time, but gestures are much nicer than buttons if you're not a gamer).
gwicks56 4 days ago 0 replies      
If you are not doing Rails work, do people find Linux that much more productive these days than windows?

I generally buy Thinkpads because of the keyboard and cost (I upgrade them myself), and used to use Ubuntu probably 85% of the time. Over time I have found myself doing more and more in Windows, because of both drivers and battery life.

For sysadmin work it makes sense, but for general dev work, I tend to find everything works just fine on Windows, plus it makes day-to-day usage easier.

ktta 4 days ago 1 reply      
Non-mobile link:


dang, can we get the link changed to this since most phones will redirect to the mobile version, but for some reason, people haven't figured a way to redirect to desktop versions from mobile?

projektir 4 days ago 2 replies      
I got a 2016 Blade recently and installed Arch Linux on it. The only thing I found problematic so far was Nvidia Optimus, which seems a general issue.
chris_wot 4 days ago 1 reply      
This got the attention of Alberto Ruiz at RedHat...
In praise of cash aeon.co
432 points by denzil_correa  4 days ago   353 comments top 4
CPLX 4 days ago 14 replies      
The core concept to remember I think is that there are two ways to pay for things:

* Ways that involve cash or cash equivalents

* Ways where a purchase requires the permission of someone else

Just think of the word authorization, which is a required element of essentially all non-cash transactions. It has the word authority embedded right in it. If you're OK with that concept, you are necessarily OK with the idea that someone you have never met and don't control has the ability to stop you from using your funds in the way you'd like to at any time.

A cashless society is, at a fundamental level, not free.

PaulAJ 4 days ago 1 reply      
The thing about the cashless surveillance panopticon is that it still can't stop illegal drugs. Every prison is exactly that kind of surveillance society, and in every prison drugs are readily available.

HSBC is one of the principal members of this privatised money system. It was found to be laundering money for international drug dealers, and got off with a slap on the wrist. Nobody working at the bank has been prosecuted.

saxonklaxon 4 days ago 0 replies      
Some people might like the idea of all transactions being monitored and controlled. Ensuring that people behave ethically, that they don't avoid tax, etc. The problem is that elitists can't help believing that they know how to run everybody's lives. So it wouldn't stop there and what future regulators claim to be ethical will morph into supporting a tyrannical status quo.
sparrish 4 days ago 4 replies      
After having my card skimmed for the 4th time (at a gas station pump, at checkout at grocery store, etc), I decided to pay for everything with cash. It's not worth the hours it takes to clear up things with the bank and I get a better understanding of where my money is being spent when my wallet gets light.
What happens when you swipe a credit card affirm.com
460 points by trishakothari  3 days ago   167 comments top 18
bigbugbag 2 days ago 3 replies      
> What happens when you swipe a credit card

People look at you funny for not knowing how a credit card has worked for the last 30+ years. Here, chips got introduced in the mid-80's and all cards had chips by the early 90's.

Why did it take so long for credit cards to feature chips in the US? Because it took a while for the CIA's In-Q-Tel to take control of Gemplus, the company that owned the smart card technology, which happened in the early 2000's. This is known as "l'affaire Gemplus". Once they got control, the company HQ moved to the Luxembourg tax haven and the R&D and tech moved to the US. A few years later, in 2006, it merged with competitor Axalto to form Gemalto, famous for making SIM cards and recently for their SIM crypto getting into NSA hands[1]. In 2009 the French government tried to counter the takeover by becoming Gemalto's major shareholder, but it was too late; the CIA got what they were after, as shown by In-Q-Tel selling its shares in 2010, which coincidentally was the year EMV credit cards got introduced to the US.

[1]: https://theintercept.com/2015/02/19/great-sim-heist/

Cyph0n 3 days ago 3 replies      
I'm kind of disappointed that you didn't go into the security features present in a typical credit card transaction. For example, you could describe the crypto protocols used to communicate with the gateway and/or card processor, what kind of data the stripe and chip contain and how it is used for authentication, etc.

Another thing I find interesting is the anti-tamper features that are present in a standard credit card terminal. There's a great CC terminal teardown video by EEVblog that walks you through them: https://www.youtube.com/watch?v=tCgtTPwlDSo.

endgame 3 days ago 0 replies      
Interestingly, interchange fees in Australia for credit cards are going to get capped to 0.8% later this year. This is forcing banks to re-evaluate their reward card offerings. AIUI, American Express isn't affected because they have direct relationships with each individual merchant.

The big 4 banks in Australia have historically offered an Amex card and a Visa/MasterCard linked to the same account (because Amex isn't accepted everywhere and often attracts surcharges). If ANZ's recent move is anything to go by, it looks like the banks won't be able to afford offering a linked Amex with their accounts.

nodesocket 3 days ago 5 replies      
Does anybody know if, using Stripe, we can pass the 2.9% fee on to our customers? We offer customized consulting plans that vary and are typically in the thousands, so I'm thinking of just adding a 3% credit card processing fee to the invoice for clients that use a card.

Researching around the net, it seems for traditional brick and mortar, some issuers don't allow you to charge a percentage based card processing fee. Any idea is this is allowed on Stripe?
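One wrinkle if surcharging is permitted at all (card-network rules and some US state laws restrict it, so check before relying on this): adding a flat 2.9% on top undershoots, because Stripe's standard online rate of 2.9% + 30 cents applies to the grossed-up total. A sketch of the gross-up arithmetic, with that rate as an assumption:

```python
def gross_up(net_target: float, pct: float = 0.029, fixed: float = 0.30) -> float:
    """Price to charge so that, after a pct + fixed fee, you net net_target."""
    return (net_target + fixed) / (1 - pct)

price = gross_up(1000.00)   # ~1030.18, not the naive 1029.00
fee = price * 0.029 + 0.30
net = price - fee           # back to ~1000.00
```

The naive 2.9% markup would leave the merchant about a dollar short on a $1000 invoice.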

digler999 3 days ago 6 replies      
What annoys me is the expiry date and CVV. It seems like their solution to "more secure" is just to add more numbers. How about putting the last 3 digits on the back and calling it the CVV? How about using alphanumeric card numbers so we don't have to use as many digits? Get rid of the expiry date used as validation. It's just more entropy; if you need more entropy then add another digit. It's just annoying having to type that crap in all the time.
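For what it's worth, one of those digits already isn't entropy: the last digit of the card number is a Luhn checksum, a typo detector rather than a security feature. A minimal sketch:

```python
def luhn_valid(number: str) -> bool:
    """Luhn check used by card numbers: double every second digit from the
    right, subtract 9 from any doubled result over 9, sum must be % 10 == 0."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

So a randomly mistyped digit is usually caught client-side before the expiry/CVV round trip even starts.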
leeoniya 3 days ago 4 replies      
i've always found it to be profoundly messed up that address validation (AVS) and security code validation (CVV/CSC) happen after the funds are authorized and held by the issuing bank.

this means if you choose to decline transactions for cvv or avs reasons in your gateway settings, those funds are still actually held by the customer's bank as pending for ~24h, preventing them from trying the auth again if it puts them over their credit limit.

the only way to release them is for the merchant to call the customer's bank and provide the cc number and auth code.

we've had to leave these "security" settings off in our payment gateway and just assess the processor codes ourselves. then manually call the bank if there are mismatches. it's a massive pain in the ass and pretty much a necessity if your avg order is hundreds of dollars. >:(

ikeboy 3 days ago 2 replies      
>If the merchant doesnt settle within a certain time frame specified by the network, then the authorization expires and the reserved funds are released (as with every complex system, there are caveats here too).

Where can I read specifics for each network?

Also, the post doesn't do a good job of explaining what affirm is. All I know is it's some kind of alternative.
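The auth-then-settle lifecycle this quote describes can be modeled as a small state machine. The 7-day window below is purely illustrative; as the quoted article says, real windows vary by network and carry caveats:

```python
from datetime import datetime, timedelta

class Authorization:
    """Toy model of a card authorization: funds are held at auth time and
    either captured (settled), voided, or released when the auth expires."""
    WINDOW = timedelta(days=7)  # illustrative, not any network's real window

    def __init__(self, amount: float, created: datetime = None):
        self.amount = amount
        self.created = created or datetime.utcnow()
        self.state = "authorized"  # issuer holds the funds

    def capture(self, now: datetime = None) -> None:
        """Settle the transaction; fails if the hold has already lapsed."""
        now = now or datetime.utcnow()
        if self.state != "authorized":
            raise ValueError("nothing to capture")
        if now - self.created > self.WINDOW:
            self.state = "expired"  # hold released; merchant must re-auth
            raise ValueError("authorization expired")
        self.state = "captured"

    def void(self) -> None:
        """Release the hold without moving any money."""
        if self.state != "authorized":
            raise ValueError("nothing to void")
        self.state = "voided"
```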

ChemicalWarfare 2 days ago 0 replies      
The OP has things mixed up quite a bit. For starters, the Point Of Sale flow should be described separately from the Card Not Present (as in online purchases) flow (protip: there really is no such thing as a transaction void in the POS world). There are also things like auth reversals, partial captures, etc.
coin 3 days ago 5 replies      
> swipe your card

Sadly swiping is being replaced by chipcard.

joopxiv 2 days ago 0 replies      
Depending on the category of the merchant, the network charges an interchange fee, the cost of which is split between the issuing bank (which takes most of the cut), the card association, and the acquiring bank.

This isn't entirely correct. The interchange is the part of the transaction fee that goes to the issuer. The card network gets a scheme fee, and the acquirer/processor charges a fee on top of that. If a merchant pays the same rate regardless of what type of card is used (credit/debit, consumer/business), then the processor or acquirer is basically rounding up.

abalone 3 days ago 2 replies      
> Visa and Mastercard alone control 70% of the market based on purchase volume. And lack of competition reduces the incentive for firms to improve the efficiency of their technological systems or price their services fairly.

Some detail about the pricing might help here. Most of the fee is actually going to customers, believe it or not, and sometimes processors. It is an interesting study in complex two-sided market dynamics. This is critical to understand when thinking about how to improve the system, and it's something many startups figure out the hard way.

Most of the processing fee goes back to the issuing banks, not Visa/MC. On a typical transaction:

1. Visa takes "only" 0.11% + 2 cents.

2. Depending on the card maybe 1.5-2% + 10 cents goes to the bank that issued the card (e.g. Chase, Citi, Bank of America). These are fixed and published rates called interchange.[1]

3. The remainder of the fee is processor markup (e.g. Stripe, First Data). They are responsible for merchant fraud so their markup can vary a lot depending on merchant risk profile and bargaining power. E.g. Starbucks pays virtually nothing in processor markup, while Stripe's markup on thinly vetted ecommerce sites is huge.

So while 0.11% of all card transactions is a lot of money for Visa/MC at scale, most of the "high" 2-3% card fee is actually going back to banks. Competition between Visa & MC keeps their cut to where it is. Even though they control "70% of the market", there is two-party competition there.

So what's the deal: why are fees so high? Don't banks compete with one another? Well, yes, they do, but not by picking lower interchange rates. There's one more piece to the picture: card benefits like reward programs that refund 1-2% to the customer, thinning the bank's take considerably. Banks compete on how much reward to pay out to the customer.

Merchants dislike higher fees, obviously, but given the importance of card acceptance to completing a sale, they are more tolerant of fees than customers are tolerant of poor card benefits. The bottom line is banks need to compete for customers more than card networks need to compete for merchants.

The only way merchants have been able to reduce card fees is through regulation. Visa/MC have built systems that effectively leverage the power of customer choice. If you want to build something different you need to look at the customer benefit side of things, not just offering merchants a lower-cost method.

[1] https://usa.visa.com/dam/VCOM/download/merchants/visa-usa-in...
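Plugging the parent comment's figures into a hypothetical $100 sale at a 2.9% + $0.30 merchant rate makes the split concrete (all rates illustrative; interchange taken at 1.8% + $0.10 from the comment's range):

```python
amount = 100.00
merchant_fee = amount * 0.029 + 0.30                 # merchant pays:  ~$3.20
network      = amount * 0.0011 + 0.02                # Visa/MC:        ~$0.13
interchange  = amount * 0.018 + 0.10                 # issuing bank:   ~$1.90
processor    = merchant_fee - interchange - network  # markup:         ~$1.17
```

And if the issuer then pays the customer a 1.5% reward, its net take drops to roughly $0.40, which is the "thinning" the comment describes.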

sAbakumoff 3 days ago 0 replies      
Gateways like Stripe or Braintree are used for online payments only, aren't they? They are not involved when you swipe a card in a physical store's terminal.
xaduha 3 days ago 0 replies      
Swipe? Swipe is old and boring, no single mention of smartcards in the article or in the comments.
ourmandave 3 days ago 0 replies      
That title is a lot less fun if you're standing in front of a whiteboard and someone is judging your answer.
amelius 3 days ago 1 reply      
It doesn't say if the data ends up with advertisers, but I wouldn't be surprised.
flylib 3 days ago 3 replies      
anyone got a collection of more posts like this?
dangerboysteve 3 days ago 1 reply      
I stopped reading after the first sentence "Ninety percent of Americans have used a credit card" which I find to be completely untrue and fabricated.
You May Want to Marry My Husband nytimes.com
577 points by dankohn1  4 days ago   212 comments top 25
incanus77 4 days ago 6 replies      
Even when you try to live for today, things can broadside you.

Last year, my wife, age 36, went from the healthiest person I knew to dead in four months from aggressive pancreatic cancer.


These things happen and underscore the frailty in thinking that the present continues on unchecked and is bound by some sort of rule of inertia.

Don't wait; do the things you want to do today.

sndean 4 days ago 3 replies      
Just yesterday my fiancee told me about her day:

A relatively healthy looking (and his chart agreed) 41 year old guy walked into the clinic with a severe headache around 10am. By around 4pm she had to tell his family he was likely brain dead.

Because of this and other things she's seen recently, we're planning more trips. See the Seven Wonders of the World.

I've only left this timezone a few times before. And we've both spent the first ~30 years of our lives in school... rethinking how much weight we've been putting on things.

somesickdude 4 days ago 3 replies      
I'm not posting under my normal account as would prefer not to publicly disclose this to everyone.

There is no right answer as to whether we should save for the future or do everything we want now and say fuck it, since tomorrow may not come.

I do believe it would serve well to try and strike a balance if possible.

Forever I was only a penny pinching saver and thought everything could wait until I was able to retire (hopefully early).

Then at age 29 I went in to see a doctor after I started to get extremely sick. I was a health/gym nut, but over the course of a year I had zero energy and couldn't recover from workouts. I started having blurred vision, shooting pains, seizures, vertigo, random falls, blacked out a couple of times, dropped about 30 pounds, and most of my hair fell out.

After seeing quite a few doctors, they found I had developed multiple autoimmune disorders, one of which attacks the pathways in the brain.

When I was diagnosed, several neurologists came and spoke with me and explained they could treat the symptoms, but that was it. The prognosis was, and still is: they don't know. The illness is relapsing-remitting, so I will get so bad that I honestly want to just die so that my pain can stop. Then six months later I'll feel almost normal and wonder how long it will last.

The doctors say if I survive it long enough I will eventually be in a wheelchair or housebound. Maybe though my lungs will just stop working long enough that I suffocate.

The week before last I was back at the gym working out like nothing is wrong; then last weekend I collapsed in my driveway. Seizures are back, and my wife wants me to get a cane or something to help prevent falls, as this one about landed me back in the ER.

RangerScience 4 days ago 2 replies      
"To suspect your own mortality is to know the beginning of terror; to learn irrefutably that you are mortal is to know the end of terror."(Children of Dune)
Bjorkbat 3 days ago 2 replies      
Curiously this has come at a weird time for me. I didn't think much of healthcare. I just turned 27, and like most twenty-somethings I have this idea that I'm immortal (not literally, but you get the idea). Lately though I've been dealing with a great deal of stress and anxiety related to my job, stress combined with the sickening feeling of doing something that in the long-term just feels so pointless. Coincidentally the internet seems to be preoccupied with cancer and death as of late, or maybe it could just be me. Prior to reading this earlier I found this arresting article regarding colon cancer rates rising in people my age.


It has made me thought about my own mortality, but that only occupies half my thoughts. The other half are preoccupied with this sickening reality that our 21st century technology really isn't all that great.

Sure, we are getting better at beating cancer. Immunotherapy seems especially promising. On the other hand though, it's almost maddening how many people develop a mild pain and find out it's late stage cancer after a doctor's appointment. Had they ignored said mild pain it could've progressed to the point where they're terminally ill.

Facebook can probably build an elaborate psychological profile based on my online habits. It knows my mind. How strange that we can't hope to know our own bodies so well.

RikNieu 3 days ago 0 replies      
I often reflect on the fact that everything is temporary. The people in my life, my abilities, my opinions and attitudes...

I learned this sharply throughout my twenties, and it lead me into a deep dive into Buddhism.

The past is gone, the future is a figment of my imagination. My interpretation of what's happening now is up for debate... Nothing I rely on, or want to rely on, is stable and dependable. All is subject to change, the good and the bad.

You have to find your anchor point or stability in the present moment. If you pay attention, you'll see that that is all there is, really. And it comes and goes, always, fleeting.

Buddhist monks have to reflect on the '5 Remembrances' every morning. It might do us all good to do so too.


rcarmo 4 days ago 2 replies      
This is the sort of thing that reminds me there is life outside work, and that we should spend a lot more time enjoying the people we are supposed to be spending it with.

(Commuting home, so the time spent online and tapping this out is not misplaced.)

At the same time, it is a remarkably poignant and romantic way for her to both affirm her love and reinstate my faith in mankind.

My heart goes out to her and her family.

balabaster 4 days ago 1 reply      
Okay, my day is over and I'm going home. I'm done.
xutopia 4 days ago 2 replies      
It's a sad story but I love the way in which she welcomes the potential lover that will one day replace her. Makes me think of what the polyamorous call compersion.
touchofevil 4 days ago 1 reply      
I have found these samurai death poems to be pretty interesting when contemplating one's mortality.


DiegoRamirez 4 days ago 4 replies      
Why do I have Steve Jobs-like magical thinking? I'm in my mid-40s, work out a lot, and feel that nothing can touch me, like a 15 year old?

Nothing wrong on surface, but doing a comprehensive blood workup on Monday. Still not scared at all

But why do I think I am immunine (sic) to Cancer. I think I can lift, run my way out of anything. "I am a man among men".

That's obviously BS. When the inevitable "something" comes down how do I deal with it? e.g. Supermen die of old age! not!

finkin1 4 days ago 0 replies      
"There are ways to really live in the present moment. What's the alternative? It is always now. However much you feel you may need to plan for the future, to anticipate it, to mitigate risks, the reality of your life is now. This may sound trite... but it's the truth... As a matter of conscious experience, the reality of your life is always now. I think this is a liberating truth about the human mind. In fact, I think there is nothing more important to understand about your mind than that if you want to be happy in this world. The past is a memory. It's a thought arising in the present. The future is merely anticipated, it is another thought arising now. What we truly have is this moment. And this. And we spend most of our lives forgetting this truth. Repudiating it. Fleeing it. Overlooking it. And the horror is that we succeed. We manage to never really connect with the present moment and find fulfillment there because we are continually hoping to become happy in the future, and the future never arrives." - Sam Harris (https://www.youtube.com/watch?v=ITTxTCz4Ums&t=17m52s)
chris_chan_ 4 days ago 0 replies      
Thanks, remind me that life is fragile. Ok, I am going to hospital and do a full body medical diagnostic scan now.
thecrazyone 12 hours ago 0 replies      
Such a heart warming thing to do, this made me tear up now

Such a beautiful person (the wife)

I sincerely wish things were better. My heart goes out to you :')

aj7 4 days ago 1 reply      
I've always been a saver and a dreamer. I get tremendous satisfaction simply reading and observing. This is unbalanced. So I married a very physical ball-of-energy life-of-the-party ex-ballerina. She survived simultaneous bouts with early-stage breast and ovarian cancer, and is healthy as a horse, knock on wood, and out socializing with her buddies right now.

We purposely live beyond our means to maximize actuarial utility. You know what I mean. But the stress of living like that on this indoor cat is, ironically, killing me.

bradleyjg 4 days ago 1 reply      
Heartbreaking. Not sure what else there is to say.
zymhan 4 days ago 0 replies      
Well, fuck.

edit: Those aren't the feelings I wanted on a Friday evening.

Guess I'll go hug my SO

staunch 4 days ago 2 replies      
To me, the worst fact of life is to realize how intensely and tragically our ancestors suffered from lack of medical technology, and that it continues to our day.

And the greatest fact of life is that all this suffering will soon be at an end.

Our descendants will pity us, as we pity ours.

sxates 3 days ago 0 replies      
This makes me want to be a better husband.
davidf18 4 days ago 1 reply      
A few years ago I would have been the victim of a terrorist attack in Jerusalem had I not gone into a store to buy a candy bar.
19eightyfour 3 days ago 0 replies      
Have a pleasant journey, Mrs Krouse Rosenthal
minimaxir 4 days ago 2 replies      
I'm curious as to why this submission was flagkilled. Yes, it's not related to tech, but it's good writing.

EDIT: Apparently unflagkilled. That's rare.

CiPHPerCoder 4 days ago 3 replies      
...Why is this flagged?
zymhan 4 days ago 1 reply      
gankedfrank 4 days ago 8 replies      
The Unix-Haters Handbook (1994) [pdf] mit.edu
381 points by arpa  4 days ago   306 comments top 7
dsmithatx 4 days ago 5 replies      
When I read this 20 years ago I would never have believed I'd be typing this on a Mac laptop running yet another Unix variant. This line is now so funny...

As for me? I switched to the Mac. No more grep, no more piping, no more SED scripts. Just a simple, elegant life: Your application has unexpectedly quit due to error number 1. OK?

Philipp__ 4 days ago 6 replies      
I like to pose crazy questions to myself and think about them. For example, what would the thing look like that makes people go "oh, this looks good enough to replace UNIX"?

Don't get me wrong, I am a big fan of UNIX, but I hope I will be alive (though I doubt it) when we see some new thing that makes UNIX feel dated. Now, some of you might jump in and say "Oh, but UNIX already feels dated", and that would make a conversation of its own, but I think people say that more because they are bored with UNIX, or they dislike certain parts of it.

And what breaks my heart is the general disinterest in operating systems among young developers/students (I am a student too, but I find these to be the most interesting of all university courses). I see very few people doing OS work today. I wasn't there to see how it was in the 80s and 90s, but from what I've read, you had much, much more choice, though the quality was debatable. Why do we always consider operating systems to be a "solved" thing? Is it because of the way the Von Neumann architecture works: we abstract the computer in the way that gave us UNIX, and we won't be able to discover and make something different but as capable as UNIX without changing how we think about what a computer is and how it works? Have we gotten so used to computers as they are, especially newer generations, that we take things for granted and just go forward with what we inherited?

open-source-ux 4 days ago 3 replies      
It's interesting that no one blinks twice at the thought of using a decades-old operating system - it's "mature", "battle tested", "proven over time" or some other phrase.

On the other hand, programming languages that are decades old are looked upon as antiquated and not fit for modern problems, so we create new languages and discard the old ones, including forgetting useful ideas those languages contained. Sometimes, we come back to some of those ideas. So a dynamic language decides that types are in fact quite useful, and a single exe file is so much simpler to distribute than endless tiny script files, and maybe speed does matter after all.

Not passing judgement, just saying that's how it is. And it is rather odd isn't it?

raverbashing 4 days ago 4 replies      
Unix is weird because it evolved organically and without a unified direction. But it remains because power and familiarity beat user experience.

Yes, the "pure" Unix tools are awful, GNU improved on their usability a lot. But they're still a simple command that does something.

Except Autotools. Those should burn in eternal damnation.

dwheeler 4 days ago 1 reply      
Funny book, I have it in my hands (with the barf bag!). The anti-foreword by Dennis Ritchie is awesome all by itself.

Some of it is long obsolete. For example, Usenet/NNTP is rarely used today. And many of the specific implementation problems they mention are long-fixed on modern systems.

Some of the problems they note are still valid. Some of the challenges of dealing with filenames are absolutely still true; because filenames are sequences of bytes (NOT sequences of characters), and allow stuff like leading dashes, control characters, and non-characters, you can have a lot of problems. See the stuff on pages 168-171. I talked about this in https://www.dwheeler.com/essays/fixing-unix-linux-filenames.... and https://www.dwheeler.com/essays/filenames-in-shell.html
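A quick sketch of the byte-sequence problem the comment describes. The two filenames below are assumptions chosen for illustration; both are perfectly legal on POSIX systems:

```python
import os
import tempfile

# On POSIX a filename is just a byte sequence: anything except '/' and NUL
# is legal, including a leading dash or an embedded newline.
d = tempfile.mkdtemp()
for name in ("-rf", "with\nnewline"):
    open(os.path.join(d, name), "w").close()

# Line-based pipelines (e.g. `ls | while read f`) would split the second
# name in two, and the first is easily mistaken for a flag by a careless
# `rm $f`; iterating over os.listdir() handles both correctly.
print(sorted(os.listdir(d)))  # ['-rf', 'with\nnewline']
```

This is exactly why the linked essays recommend `find -print0`/`xargs -0` style processing and `--` before filename arguments.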

That said, there are reasons that Unix-like systems took over everything. Many of their complaints are from lack of consistency. But some loss of consistency is inevitable when you have a marketplace of many people with many ideas. Many of the other systems they remember fondly were often tightly controlled by a small number of people - they were more consistent, sure, but they were also slow to implement newer ideas. When you're running a race, the one who runs faster usually wins.

omginternets 4 days ago 8 replies      
While entertaining, I'm left with the following question: if not UNIX, then what?

Are there any successful non-UNIX-y OSes that are worth checking out?

I may embarrass myself here, but I was under the impression that BSD, Plan 9, Solaris and HP-UX were all UNIX-y ...

grabcocque 4 days ago 4 replies      
If I ever need to feel good about myself as a software developer, I only ever need to read this book's chapter about the X window system.
Why Facts Don't Change Our Minds newyorker.com
406 points by ryan_j_naughton  14 hours ago   270 comments top 17
unabst 0 minutes ago 0 replies      
Fake news is essentially a man-in-the-middle attack.

Who's actually been to Iraq or Afghanistan? Who has actually seen first hand evidence of global warming or water pollution or of anything?

Most facts are too detached from ordinary reality. That's why the person in charge of relaying them can alter them as they see fit.

We're not to the point where we're faking wars (I don't think), so there are starting points. But the story, the narrative, and the facts of the matter are all subject to manipulation. Even the "honest" media is guilty of sensationalism.

But the point is, when these are your "facts", you can't turn around and tell normal people to be more scientific or to choose their sources more wisely. You saying that is just another channel, waving yet another banner that says science, and a proponent of your facts.

brightball 12 hours ago 10 replies      
Another factor here is when people abuse the word "fact" regularly. If you ever get into a discussion with somebody who claims they've proved their point by citing "facts", then actually follow up on their citations and discover that they don't say what the person thinks they say, it creates an impression of the messenger, not the message.

Enough messengers saying the same faulty message over and over and you distrust the message just because it's been repeated so often.

This is an easy thing to do in conversations on subjects like guns, where the key to slanting the message is subtle wording changes that allow you to leave out some data or include other data. The person presenting their information thinks they have "facts" because they see numbers that support their point of view, without knowing what's been left out.

I seem to remember a GitHub repo posted to HN a couple of years back that did exactly that: it presented the same data set 3 different ways with 3 entirely different conclusions.

bambax 12 hours ago 6 replies      
> The fact that both we and it survive proves that it must have some adaptive function

No, no, no, absolutely not, no, it doesn't. No. A feature can maintain itself in a population for a number of reasons:

- it's neutral or not detrimental enough to drive affected individuals to extinction (ie, to get them killed before they have a chance to reproduce)

- it's linked to some other trait that provides actual benefits

- it's so recent it didn't leave time to be selected against

- or, yes, it actually has some adaptive function.

But just because a feature is found does NOT "prove" anything in and of itself.

> Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren't the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Excuse me, what?? There's little advantage in reasoning clearly while hunting prey and making tools and building traps to catch huge mean animals that can and will kill you if you do anything wrong?

And there's much to be gained from "winning arguments" in a cave, while pondering about one's social standing?? Come on.

* * *

I suggest a more simple and straightforward explanation for the limitations of "reason": our brain is, in fact, write only.

We usually don't notice it because it also has a huge capacity and so we can always write more, it never gets full.

But each named item corresponds to a unique value that cannot be overwritten.

If you want to store a new value you have to create a new name to store it with.

To "overwrite" something you can create lookup tables of sort, that tell you that the old value is in fact "wrong" and that said item should be associated with another, newer item. But this is costly, and so is avoided whenever possible (because the first rule of life is to be lazy, ie to conserve resources).

Mikeb85 11 hours ago 2 replies      
Facts don't change our minds because we make up our world views and beliefs in a way that's expedient for us to happily continue about our lives.

For instance, we all NEED to believe we're the 'good' guys. Whether the 'we' is our religion, our job, our state, political stance, etc... This prevents existential angst from realizing we're in the wrong, and then needing to change our behaviour which would disrupt our everyday lives.

This is also the reason why we stay in failed relationships, keep toxic friends, jobs we hate, etc... It's easier and more conducive to our survival instincts to keep the status quo, whatever it is. Only when things get extremely bad do we ever change.

On the flip side, let's say popular opinion about a particular topic flips. When it becomes expedient to change our opinion, then we're very quick to do so. When keeping an archaic opinion begins to cause friction in our everyday lives, we change to the prevailing opinion because again, staying with the tribe is easier than going against it.

But of course, realizing all of this would be to reduce human existence to base survival instincts, which would also ruin our self-narrative. So we think we're all special, enlightened, unique and think for ourselves no matter how much the evidence points to the fact that we all devolve into holding the same popular opinions.

sgt101 12 hours ago 2 replies      
Vaccines are safer for a population, and they are safer for individuals in an unvaccinated population. In a vaccinated population the risks to the individual having a vaccine are low, but non zero. The risk of not having the vaccine may well be lower. The problem is that by not being clear that having a vaccination is an altruistic act with marginal risk but a massive social good the way is left open for claims of duplicity. This may be the key issue; some people won't agree with the desired position because the fact we are using isn't a fact at all. We fail to persuade because we don't make a good case.
methehack 11 hours ago 1 reply      
I guess it should come as no surprise that the comments section for an article called "Facts Don't Change Our Minds" would be so disappointing. For me, this article and the many like it that point out how whimsical and feeble our natural, undisciplined cognitive capacities are are a rallying cry to continue the Enlightenment project, as best we can and in as personal a way as we can. Years ago I learned that my family motto from way back was "through faith we are free". I found that immediately alienating, because for me it was always "through doubt we are free".

From this article, to my comment, to current events: https://www.nytimes.com/2017/02/28/opinion/the-enlightenment...

baldfat 11 hours ago 2 replies      
The issue isn't facts. The issue is our worldview. If it is a binary worldview (Good vs Evil) then what we don't agree with is always bad/evil (Black Hat Cowboy) and what we do agree is good/good (White Hat Cowboy).


So when our side makes a mistake, we defend and spin until it is okay for us to still think of them as wearing a White Cowboy Hat. Anything good the other side does will also get conspiracies and spin so that they are always wearing a Black Cowboy Hat.

If you want evidence ask anyone with a strong feeling for Trump/Clinton and you will see this in action.

The issue is when people change their minds (with a binary worldview) it isn't accepting a small fact but it is a whole worldview change of astronomical proportions. I feel that people have gotten more and more binary and we need more shades of gray where it isn't world changing to accept a problem with a fact.

From Asia thinking about American Worldview: http://www.atimes.com/article/americas-binary-worldview-kill...

Qcombinator 6 hours ago 0 replies      
I'm tempted to file this as another "Humans think oversimplistically, says oversimplistic study". The world is a massively complicated place, and nobody can come close to understanding it all. If your intellectual background tells you that vaccines don't cause autism and somebody comes to you with a study that purports to say they do, is rejecting it "confirmation bias" or is it in fact the rational thing to do, even if you lack the medical expertise to explain where the study goes wrong?

Of course, your background beliefs are never going to be perfect, so sometimes you will reject the wrong things, but that's not because our reason is "broken" per se, it's an engineering trade-off: we have to take certain shortcuts, make certain assumptions, apply certain guesses, because working everything out in full mathematical detail just isn't possible.

(That said, I'm certainly not claiming that most people are brilliant thinkers; the line "Coming from a group of academics in the nineteen-seventies, the contention that people can't think straight was shocking" would hardly come as news to anyone from Aristophanes to Zamyatin.)

charles-salvia 11 hours ago 0 replies      
Evolutionary explanations are often speculative just-so stories with no basis in actual genomic research. Nobody ran a hierarchical clustering algorithm over the human genome, inspected the dendrogram, and then found meaningful gene expressions that indicate confirmation bias somehow.

The reality is confirmation bias is probably deeply rooted in many other elements of human psychology. My (completely worthless) conjecture is that your "opinions" are deeply associated with your own sense of self-identity and self-worth, and therefore they register as very much worth protecting, regardless of external data. Opinions also often overlap with tribal affiliations, which makes them even more deeply rooted in evolved primate cooperative behavior, and therefore even more worth protecting.

makecheck 7 hours ago 2 replies      
When people find out who said a particular thing, the what seems to fall by the wayside and the who seems to matter more. The statement is accepted or rejected outright because of the person. Maybe it is time to change that.

For instance, imagine a world where every bit of text you encounter (every news story, every tweet, every statement, every message) had no attributions AT ALL. Meaning, you have no idea if something was written by a celebrity, a politician with a (D) or an (R), your best friend, or your worst enemy. At that point, quite a bit of bias should start to filter out!

Or, there could be a middle ground, a kind of anonymous key to recognize statements by the same person. Say you read a story last month, and agreed with that person; this month, maybe there is a way to find out if something else you read is from the same person but NOT a way to see exactly who that is. Then, you can start to build a sense of trust in sources but are still not able to be influenced by other factors.
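The "middle ground" above can be sketched with a keyed hash: a platform-held secret turns every author identity into a stable pseudonym, so readers can link statements to one source without learning who it is. This is a minimal illustration, not a proposal from the article; the key, function names, and truncation length are all assumptions:

```python
import hmac
import hashlib

# Assumption: the platform holds this secret; readers never see it, so they
# cannot brute-force identities from pseudonyms.
SECRET_KEY = b"platform-held secret"

def pseudonym(author_id: str) -> str:
    # HMAC-SHA256 gives a deterministic, unforgeable tag per author;
    # truncating to 12 hex chars keeps it readable.
    return hmac.new(SECRET_KEY, author_id.encode(), hashlib.sha256).hexdigest()[:12]

# Same author -> same tag (trust can accumulate); different authors -> tags
# that are unlinkable without the key.
print(pseudonym("alice") == pseudonym("alice"))  # True
print(pseudonym("alice") == pseudonym("bob"))    # False
```

One caveat the comment itself hints at: writing style and topic choice can still de-anonymize prolific authors, so the scheme only removes the attribution channel, not every bias signal.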

JackFr 11 hours ago 0 replies      
> If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

The complete absence of self-knowledge and high level of unintentional irony here is astonishing. "Confirmation bias and groupthink happen to other people, not me."

mkalygin 11 hours ago 2 replies      
I was recently thinking on a similar topic after reading an article about scientific myths most people believe. For example, one such myth is that humans are genetically predetermined and can't change through life. It's clear that a lot of social, cultural and gender factors impact our nature, but still there is such a belief.

So my conclusion was that our reasoning is very restricted. We tend to simplify our thoughts and our memories. And all this is highly dependent on the emotions we feel. That's why we like to make our memories brighter than they are, make them more romantic and ideal. Everything in the past was better, etc. That's why we remember facts that are easier to remember and close to our beliefs, sometimes no matter whether they are true or not.

skybrian 8 hours ago 2 replies      
It's important to remember that most scientific disputes can't be resolved by two uninformed people talking on the Internet. If you can get people to share links to well-written scientific articles, you're doing well, but these disputes can only really be answered by actual scientific discussion between actual experts who take the time to dig into methodology (etc), and maybe not even then; the replication crisis is a thing.

With enough study, maybe you can become an expert. Mostly we don't. Reading a few articles and playing "instant expert" is just a bad habit.

Remembering your ignorance is good; try not to forget that you knew nothing about the hot topic of the day before it became news and you suddenly became interested. Unless you have personal experience to share, we're really just sharing links here.

enord 12 hours ago 3 replies      
The interactionist perspective presented is enlightening. It both motivates and explains many of the classic reasoning snafus that we "suffer" from. In the end our mental artefacts (including beliefs) are just tools, and their value (or "truth") is a function of their utility.

If we were to present a 19th-century carpenter with a nail gun, he would agree upon demonstration: "that is very nice and all, thanks, but I'll stick to my hammer". He would have neither a compressor, gasoline and a long enough hose, nor a wall socket and extension cord. The nail gun has no place in his toolshed because he doesn't have the required infrastructure and context to put it to use, and therefore its utility to him is close to zero.

Beliefs are the same, and a relation to "facts" or "truth" is not a requirement for their utility. When presented with new facts and truths we always evaluate their utility before we subscribe to them. We might fool ourselves that this process is somehow "rational", but rationality is just a social preference with some second-order effects with regard to science and the development of technology (which indeed has "utility" but is far from the only "utility" beliefs can achieve).

veli_joza 12 hours ago 2 replies      
The source of confirmation bias is explained as an evolutionary mechanism: we developed the capacity for reasoning not for critical thinking but for arguing and getting others to do our bidding. In that context it would make sense to protect our own opinion from inconvenient things such as facts.

> "This is one of many cases in which the environment changed too quickly for natural selection to catch up."

How can we as a society fix this? If we educated our kids about confirmation bias (and some other cognitive biases) and taught them how to compensate for the shortcomings of their minds, would the next generation grow up to be more reasonable? If so, shouldn't this be a high priority of the educational system?

YCode 12 hours ago 1 reply      
The described permanence of lies even after they are revealed as such really lends weight (dare I say credit?) to the tactics of fake news and propaganda in general.

The value of being the first to form an impression of an event, law, etc. on someone (even if that impression will later be proven false) just can't be overstated.

eagsalazar2 11 hours ago 4 replies      
Translation: "Why did Trump win the election even though he and his policies are obviously nuts?"

Seriously, not trying to inject politics or my opinion on Trump, but there do seem to be a lot of these "why do people act so crazy" articles lately. I too am trying to make sense of it all, and failing.

Snap Tumbles Below IPO Opening Price as Analysts Say Sell bloomberg.com
339 points by chollida1  1 day ago   213 comments top 25
wayn3 17 hours ago 5 replies      
Here's one thing people somehow don't understand.

An IPO is a mechanism for a company (and its investors and other stakeholders) to sell shares in exchange for money.

The investment bank that facilitates the IPO has the job of "marketing" those shares (and doing the administrative stuff around the IPO).

The goal in selling something is to get the best possible price. Therefore, a stock price that goes up at the date of the IPO and then declines below the initial price is the GOAL. It means the bank has done its job well. It got the maximum amount of money out of buyers.

If the price stood level, it would mean the bank had left money on the table. If the price went up immediately and then STOOD THERE, it would mean the bank severely underpriced the IPO. That's the worst-case scenario. If the price dips immediately and nobody buys, it just means the IPO was overpriced.

This also means that buying at IPO means you're getting screwed unless you want to hold long-term. You're always paying a premium, unless you believe that the investment bank made a mistake. And you obviously have more data to make that bet than Goldman Sachs does. Obviously.

As a small-time investor, for whom there is essentially unlimited liquidity in the market, just don't buy IPOs. And don't get excited over IPOs dipping below their opening price. It just means you don't understand the mechanism.
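The underpricing the comment describes is usually measured as "money left on the table": the first-day gain times the shares sold. A rough sketch using the $17 offer price mentioned elsewhere in the thread; the first-day close and share count here are illustrative assumptions, not figures from the thread:

```python
# "Money left on the table" = (first-day close - offer price) * shares sold.
# Offer price is from the thread; close and share count are assumptions.
offer_price = 17.00       # $ per share at IPO
first_day_close = 24.48   # assumed first-day closing price, $
shares_sold = 200e6       # assumed shares sold in the offering

left_on_table = (first_day_close - offer_price) * shares_sold
print(round(left_on_table / 1e9, 2))  # 1.5 ($bn the sellers "gave up")
```

By the comment's logic, a big number here means the bank underpriced; a first-day pop that later fades back toward the offer price means pricing was close to the maximum buyers would pay.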

PhilWright 1 day ago 5 replies      
I love the way all reporting of stocks has to show the stock either "surging", "soaring", "tumbling" or "crashing". An IPO priced at $24.00 that sells three days later at $23.77 has not tumbled. Sure, it was relatively volatile in the first few days, but that is common in high-profile IPOs until the price settles to a market-determined level.

EDIT: No doubt if it rises to $25.00 tomorrow it will have 'rebounded' and be worth another hyped headline.

nodesocket 1 day ago 7 replies      
While I don't believe that $SNAP financials are solid (I am bearish), I do applaud them and Mulesoft for going public.

Too often today large (well past series C) tech companies (I won't name any names) refuse to go public and instead continue to take private equity and foreign investment and prop themselves up on absurd valuations with no checks and balances. Some of these companies are absolute dog shit, yet they continue to receive unlimited funding and insane valuations in a vacuum with no market to short them.

Wall Street calls bullshit when they see it, and smart people short bad companies when they find them. Silicon Valley doesn't provide a way to short these propped-up companies; they just continue to promote themselves on private equity and foreign investment.

JumpCrisscross 1 day ago 4 replies      
Before Snap's IPO, I declined when asked to make predictions. But I did say what I thought would be best for venture-backed companies in the long run: a small pop on IPO day followed by a months-long drag down to a reasonable valuation.

The small pop would signal healthy IPO conditions. The slow run down would show investors aren't in full-throttle bubble mode. We got the pop, plus some, as well as the drag down, albeit sooner and faster than expected. If SNAP continues to trade between here and the offering price we can still chalk this up as a win for SV.

(Keep in mind, Facebook--whose revenues grew 54.2% from FYE 2015 to FYE 2016--trades at 14.4x FYE 2016 revenues. That implies a $5.82bn valuation for Snap. It's trading at a $30.4bn valuation, or 5x higher.)

[1] https://www.google.com/finance?q=NASDAQ%3AFB&fstype=ii&ei=4-...

Disclaimer: this is not investment advice. Please don't be a numpty and buy or sell securities based on Internet comments.
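The parenthetical's comparable-multiple arithmetic can be reproduced directly. The FB multiple and Snap's market cap are from the comment; Snap's FYE 2016 revenue (~$404M) is inferred from the implied $5.82bn figure, not taken from a filing here:

```python
# Comparable-multiple valuation: value Snap at Facebook's revenue multiple.
fb_multiple = 14.4        # FB market cap / FYE 2016 revenue (per the comment)
snap_revenue = 0.404      # Snap FYE 2016 revenue, $bn (inferred assumption)
snap_market_cap = 30.4    # Snap's valuation at the time, $bn (per the comment)

implied_value = fb_multiple * snap_revenue
print(round(implied_value, 2))                       # 5.82
print(round(snap_market_cap / implied_value, 1))     # 5.2
```

The method deliberately ignores growth-rate differences (the comment notes FB grew 54.2%), which is the usual objection to applying a mature company's multiple to a younger one.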

jondubois 1 day ago 2 replies      
Their finances look horrible; worse than Twitter's. Unlike Twitter, they make almost zero gross profit (not even counting admin expenses and R&D). It looks like they're counting entirely on growth, but the revenue numbers still look very small. Twitter's revenue is $2 billion+, Facebook's is $27 billion+. Snap's $400 million looks measly by comparison, especially when you consider the expenses incurred.
ucha 1 day ago 1 reply      
It's below the opening price, which is not a big deal. FB, on the other hand, quickly dipped below the offering price. If the 3-day expected value of a recent IPO were significantly different from the opening price, that would create an easy arbitrage opportunity.
soheil 1 day ago 5 replies      
It is amazing how a few analysts can change how much a company is valued on the stock market. Has the opinion about Snapchat on tech forums like this (whose opinion should matter, since we're literally the people who built it) significantly changed since two days ago, to justify a $4 billion price change?
swingbridge 1 day ago 0 replies      
The company's financial fundamentals are terrible. The only thing holding the value up is fluff and hot air. If people get less excited by that, the price goes way down, and that's what happened today. If people really, really lose excitement, the price will fall like a rock, since there are none of the typical fundamental circuit breakers (like P/E ratio) to stop the thing from just going down, down, down.
panabee 1 day ago 3 replies      
It is difficult to comment on whether Snap is overvalued without analyzing its potential worth. Instead, here are a few observations about Snap's business:

* It seems unlikely Snap could plummet in popularity over a few months like some mobile game or Taylor Swift song. If Snap flames out, the usage graph will probably decay more like Skype than Draw Something. The two prongs of its platform allow Snap to (1) enable communication like Skype and (2) distribute content like YouTube. Distribution and communication companies tend to decline much slower than content creators like Zynga.

* Snap's entertainment platform could finally crack interactive TV/video. This will be fascinating to watch unfold as Snap empowers content creators to create digestible, interactive video for the coveted advertising demographic of users aged 15-34. In particular, could adding a buy button marry video and commerce in a novel and lucrative way?

* Through videos and pictures, Snap can infer a lot about users: style, hobbies, habits, food preferences, and more. This is also a hidden treasure trove for Instagram and Facebook, but given the high frequency of Snap messaging, the data could be richer and more authentic with Snap.

chollida1 1 day ago 4 replies      
I thought this was relevant more so because, of all the analysts covering SNAP, not a single one has a buy rating on it.

That's pretty unusual for a 10+ billion dollar company

wufufufu 1 day ago 1 reply      
Maybe the strategy is to start obscenely overpriced, so when you "tank" and descend into merely very overpriced, the market thinks you might soon return to previous heights, and you settle around there.
AndrewKemendo 1 day ago 3 replies      
People keep saying that their financials are terrible, but last I checked they were killing it on revenue [1] compared to the average IPO [2]. Especially if you consider that average revenues at IPO have been steadily dropping since 2012.

Average IPO revenue 2015: $38M

SNAP revenue 2015: $58M

SNAP revenue 2016: $350M

Add the fact that in 2015 only 30% of IPO companies were profitable, and they seem to be well within market bounds.

[1] http://www.businessinsider.com/data-shows-nearly-half-of-sna...

[2] https://www.wilmerhale.com/uploadedFiles/Shared_Content

wernerb 1 day ago 2 replies      
I can't argue with the logic of the talk show guest, (who seems very good at her job!)

They are dealing with economy of scale. If facebook+google have 90% ad penetration then snap must displace facebook.

josh_carterPDX 1 day ago 0 replies      
I have always found it fascinating what role analysts play in moving the price of a stock. Today was the perfect storm of shitty news/advice that drove the price down. Before this morning, the stock was up in after-hours trading.
rmoriz 1 day ago 1 reply      
Most comments focus on $SNAP itself, but as JumpCrisscross noted: It's also a bad sign for SV IPOs in general.

With the upcoming Fed hikes and the recent stock market rally, one could say that it will be a lot harder for late-stage companies to acquire further financial resources in VC/the private equity market.

After the $SNAP hype and crash, the public market will probably not be willing to invest in other tech companies in the coming months, at least not at the current valuation level.

holri 17 hours ago 0 replies      
'Weighing the evidence objectively, the intelligent investor should conclude that IPO does not stand only for "initial public offering". More accurately, it is also shorthand for: It's Probably Overpriced, Imaginary Profits Only, Insiders' Private Opportunity, or Idiotic, Preposterous and Outrageous.', Benjamin Graham
billions 1 day ago 1 reply      
Would it be possible for Facebook to execute a hostile takeover of Snap, now that Snap is public?
verelo 1 day ago 0 replies      
Can we agree that trading at 93 times revenue should be the real headline here? While I find it insane, and others will disagree, it's the real talking point whichever side you sit on!
sfilargi 1 day ago 3 replies      
Isn't this what happened with FB as well?
remyp 15 hours ago 0 replies      
The borrow rate my broker (IB) is quoting is 120% despite only 7.55% of the float being short.

I guess the clearing firms just don't have any shares to lend out -- we could see another price drop once the stock is easier to borrow.

fratlas 1 day ago 0 replies      
Eh, just volatile atm. Interesting to see if it tanks big time.
tommynicholas 1 day ago 0 replies      
Yes it opened around this price, but the IPO price was $17. Another non-story about a stock dropping in price after it grew like crazy just before the drop.
vit05 1 day ago 0 replies      
SNAP inc priced the public offering at $17 a share.
karllager 1 day ago 0 replies      
I hope I am not down-voted. I want to grab the opportunity to sum up my impressions of Snap, so I can come back a year later and read how stupid they have been.

Snap is neither Twitter nor FB, which of course is a source of uncertainty. It's not Twitter, because Snap's data consists of a lot of internet dark matter, so to speak: pixels. Let hell break loose, buy up machine learning engineers and researchers in bulk, and let them poke around in it. There is sure to be some way to optimize ads, and then just sell it off: the perfect blended ad to the right audience at the right time and place.

Is there anything more substantial besides fun? Yes: a nice social-pressure signal, millions of tiny celebrity effects playing out every day, together with all the drama, gossip and emotion, which should have a positive effect on ads. The score-rich (I mean the poor teen talent) get a discount on a product that has been seen on other influencers (but not too many). The poor pool of losers and in-betweens might buy a burger to get over the pain.

This social game is so much fun, it's even fun just to write about it. All this drama, all this pain, captured on tape and stored on a shelf. Ready for a smart machine to look at it and ask: I saw your face before, and your look on someone else; you look sad, you're mostly inside; I have the perfect product to cheer you up, may I? And, oh, I'll make it fun for you to acquire it, too.

You think this is scary? You think it's borderline exploitation, because the data generators can be kept sad, if their score is too low?

Which brings me to my conclusion for the record. Snap will thrive, partly because people just do not care about these things anymore and partly because the majority has not much choice. Stronger and more resistant individuals will just reject the instant gratification and focus softening. People who kind of see through the hype and the value propositions - the ones that maybe matter to you, and maybe matter because somebody would like to make them matter for more people, so I have to adjust. Maybe 80% of a population, according to some estimates, does things because everybody does them - no or few questions asked. And hey, if all it takes to be accepted is a push of a button, what's the problem?

And so the story continues. Snap will thrive. One thing that would kill both facebook and twitter (and snap) would be Moore's law, in the sense that a social network of friends can be held completely on the interactors' devices, because, well, in a few years 128GB per phone is ok, text and messages are cheap and can be routed without being stored and scanned, and images can be stored and cached locally.

But there is no money in letting people alone. No money, if I cannot distract them. No money if I cannot read what they read and see what they see. No money, if I cannot control them and their experience.

daliwali 17 hours ago 1 reply      
This IPO is yet another indicator of a civilization in decline.

Snapchat watches people in their most private, most intimate moments, who are unaware that all of their data passes through centralized servers. Why should states even bother with intrusive surveillance anymore, when people freely offer to surveil themselves?

If Snapchat disappeared tomorrow, people would just upload their nude photos elsewhere. Nothing of value would be lost, except for maybe the world's largest, most centralized pornography database, with fairly reliable correlation to real identities. That could never lead to abuse of power.

System76 launching ARM Pro server with 96 cores up to 1TB ECC RAM and 32TB storage system76.com
308 points by matunixe  5 days ago   126 comments top 30
bmurphy1976 5 days ago 5 replies      
Ordered a 1U from them. Going on a month now with no delivery date yet and no indications there would be a problem until after we ordered. Not a big deal for us, we can wait and ordered it because it was cheap, but consider yourselves warned.
vhost- 5 days ago 5 replies      
I got a Lemur 14" laptop a while back and absolutely hated the build quality. The keyboard is terrible and the screen has the worst viewing angles of any laptop I've ever owned. I got tired of it after 4 or 5 months and converted it to a home server, and it's been super reliable in that mode since 2014. AND it also has the benefit of staying online during power outages.
anoother 5 days ago 2 replies      
This is a Gigabyte server, probably one of the following:



What value-add does System76 provide?

codegeek 5 days ago 2 replies      
aah, System76. I hope their customer service has improved recently, but I had a horrible experience with them back in 2013 when they had to replace a keyboard (the CEO actually sent an email explaining how shitty their keyboard was), and when I asked for a refund, they simply refused. NO refunds. I had to literally open the laptop myself and install the new keyboard.

I am not one of those types who asks for refunds on anything. This was genuinely a defective laptop with a faulty keyboard (turns out that is how they were shipping it back then to everyone) and their answer was to replace it with a new one (after the CEO sent a bulk email to every customer). Why should I have to go through that hassle? This was the first time I almost thought of doing a chargeback, but didn't.

slizard 5 days ago 1 reply      
Sadly these gen-1 ThunderX cores are pretty poor in performance and not particularly power-efficient either. Cache performance is especially sucky.
https://www.servethehome.com/exclusive-first-cavium-thunderx...
http://www.anandtech.com/show/10353/investigating-cavium-thu...

I hope that this is just a rebranded system showing the readiness of System76 and raising awareness about such products, so that they're better positioned for the ThunderX2 release later this year.

thepumpkin1979 5 days ago 1 reply      
Reminds me of the Type 2A at Packet.net: a similar 96-core processor, 128GB RAM and a 340GB SSD for $0.50 USD/hr https://www.packet.net/bare-metal/servers/type-2a/
nkkollaw 5 days ago 5 replies      
I see that they're releasing a new laptop: https://system76.com/laptops/galago

This is probably the first laptop from them that I could consider buying, judging solely on that small picture of it.

Still super-thick, and I don't know if I would feel comfortable buying that over the XPS, but at least it's got HiDPI, which for the price they sell their laptops for I feel should be included.

AndrewUnmuted 5 days ago 0 replies      
If you're looking for a server dedicated to supporting Linux, I suggest checking out Penguin Computing. They have a much better reputation and have much better servers than this System76 attempt.
tiffanyh 5 days ago 1 reply      
If anyone wants to use these for web servers, you might want to think again.

Facebook did the same evaluation recently and decided that the Intel Xeon-D chip was best [1].


[1] https://news.ycombinator.com/item?id=11254755

analognoise 5 days ago 2 replies      
I wanted to like System76. I had a dead pixel on a new machine that was to be my new workhorse AAAAAND that's how I learned what a dead pixel policy is!

Turns out, theirs was rubbish. The whole laptop wasn't elegant or well constructed, but it was durable. I had to send it in for repairs once, that went well. Still, I'd never buy anything of theirs again. Ever.

nimos 5 days ago 2 replies      

Spec sheet for the CPU is there. I just don't see how IO doesn't end up murdering performance. Is the 16MB shared L2 really shared across all the cores?

astrodust 5 days ago 2 replies      
From $6399 USD. That's quite a price tag, but a lot less than a high-spec Xeon server, where the CPUs alone are $3000 each.

It'll be interesting to see benchmarks of how this performs.

alexbardas 5 days ago 0 replies      
If anyone is interested in buying a System 76 product and having it delivered outside of the US, please take into account that the price doesn't include VAT. This has to be paid separately, which usually means that each device is ~20% more expensive (when delivered outside of the US).

I ended up having to pay $400 more for delivery + VAT for a laptop (didn't know about the VAT at that time). Very good performance, but rather mediocre quality (1 USB port is completely unusable).
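For anyone budgeting an import like this, the markup described above is easy to sanity-check. The figures below are illustrative only (not System76's actual pricing), but with a ~20% VAT plus shipping, an extra ~$400 on a mid-range laptop is about what you'd expect:

```javascript
// Rough landed-cost math for a buyer outside the US (illustrative numbers only).
const sticker = 1500;   // hypothetical US sticker price, USD
const vatRate = 0.20;   // ~20% VAT, varies by country
const shipping = 100;   // hypothetical international shipping cost

const vat = sticker * vatRate;            // VAT is charged on top of the sticker price
const landed = sticker + vat + shipping;  // total cost at your door

console.log(landed); // 1900 -- i.e. ~$400 on top of the sticker price
```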

storrgie 5 days ago 1 reply      
No offense to System76 - I'm very happy they are entering this space early. I'll just wait for one of the biggies (e.g. Dell, Lenovo) so that I know the hardware will be supported.

I really want to shake off Intel if possible, and I'm struggling to find a chassis that supports the number of add-ons I require... but with these ARM servers and the network I/O they have onboard, I'm quite excited.

Splendor 5 days ago 2 replies      
Slightly off-topic; the "FakeHtop" element on the page is mesmerizing.
sgarg26 4 days ago 0 replies      
In 2013, I ordered a laptop from them as well. First, my screen died within a few months. I encountered hidden fees and rudeness trying to work through their tech support and warranty policy. Then, the motherboard died within the year.
rrggrr 4 days ago 0 replies      
Purchased two linux desktops from them. Neither lasted more than a year. Purchased two small servers, both doing fine. Overall I can't really recommend them.
salimmadjd 5 days ago 7 replies      
So many product lines for such a small company. Laptops, Desktops, Servers. Not even Apple is able to effectively manage this many different products.

I don't know much about them, but from my first impression it looks like a company with a lack of focus. I'm not sure how they will be able to create a killer product in any of the segments they're trying to compete in.

Based on my above observation (admittedly superficial), I would never buy anything from them. I would not trust the quality nor their ability to be able to support it.

ckdarby 5 days ago 1 reply      
Ordered a laptop from them back in the day. Worst purchase in my entire life.
robert_foss 5 days ago 0 replies      
The CPU seems to be 2x Cavium ThunderX_CP for a total of 96 cores.
Symbiote 5 days ago 0 replies      
What would be a typical use case for something like this?
rlonstein 4 days ago 0 replies      
anecdata: bought a Bonobo from them a couple of years ago at $EMPLOYER for a [semi]portable workstation. Arrived in just a few days and it was as expected.

The developer who used it liked it though it didn't move much once it hit his desk. I didn't like the keyboard and thought the finish wasn't polished but I wasn't using it.

legulere 4 days ago 0 replies      
How does that thing boot and does it work with a mainline kernel I do not have to compile myself?
Animats 4 days ago 0 replies      
All 96 cores share the same memory? Does the thing choke on memory bandwidth, or what?
patrickg_zill 5 days ago 0 replies      
Each 48 core cpu lists for $800. So why does the base system start at $6399?
youdontknowtho 4 days ago 0 replies      
That's impressive. 96 cores starting at $6k? Did anybody else configure a Silverback workstation with all the options to see what it would cost? ($33k... but wow.)
tlrobinson 5 days ago 1 reply      
What sort of workloads would this be suitable for?
chatman 4 days ago 0 replies      
What is the CPU speed for these 96 cores?
snissn 5 days ago 1 reply      
Why arm?
_pmf_ 4 days ago 0 replies      
I need this.
Firefox 52 released mozilla.org
449 points by gaul  14 hours ago   335 comments top 19
jwarren 13 hours ago 4 replies      
Firefox is the first major browser to support CSS Grid out of the box.

https://developers.google.com/web/updates/2017/01/css-grid is a simple introduction to it.

http://gridbyexample.com/ is probably the best reference site.

https://tympanus.net/codrops/css_reference/grid/ is another very nice single-page reference.

Some technical examples from Igalia, who have been implementing Grid in Blink and Webkit: http://igalia.github.io/css-grid-layout/

Some lovely creative examples from Mozilla's Jen Simmons:



sigvef 12 hours ago 2 replies      
Firefox is one of those projects that just keeps on giving, and it's all thanks to the hard work put in by all the contributors. It's easy to forget that sometimes. We use Firefox headlessly to render videos at https://www.musicvideodispenser.com , a task that certainly wasn't the initial intended use-case for that browser, but it still works, and we have the Firefox team to thank for that!
jaas 12 hours ago 0 replies      
I did a lot of work on Firefox's NPAPI implementation over the past decade. Improving that code was rewarding in that it had very tangible benefits for Firefox users but it was pretty clear that improving Firefox's code was only going to get us so far, and not far enough. The system is a mess and I couldn't be happier to see Firefox dropping support. I hope I can be the one to rip out the code altogether some day.
tmaly 12 hours ago 6 replies      
What I like most about Firefox is that the mobile version supports add-ons like uBlock Origin. In mobile Chrome, ad blockers are not really well supported.
wiremine 13 hours ago 2 replies      
> Added support for WebAssembly, an emerging standard that brings near-native performance to Web-based games, apps, and software libraries without the use of plugins.

I've been watching this from the sidelines... what's the best way to dive into WebAssembly? Or is it just waiting for tooling to catchup to produce it for WebAssembly-enabled browsers to execute?
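One low-friction way to dive in, while the toolchains catch up, is to drive the browser's JS API directly with hand-encoded bytes. This is my own sketch worked out from the wasm binary format docs (not an official example): a minimal module exporting an `add(i32, i32)` function.

```javascript
// A hand-encoded wasm module exporting add(a, b) -> a + b.
// Sections in order: type, function, export, code.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type 0: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = function 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // get_local 0, get_local 1, i32.add, end
]);

// Synchronous compile + instantiate (fine for tiny modules like this).
const mod = new WebAssembly.Module(bytes);
const { exports } = new WebAssembly.Instance(mod);

console.log(exports.add(2, 3)); // 5
```

For anything real you'd compile C/C++ via a toolchain like Emscripten rather than writing bytes by hand, but this shows there's nothing magic between the browser and the binary format.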

jvehent 13 hours ago 3 replies      
> Enhanced Sync to allow users to send and open tabs from one device to another.

If you haven't used this, try it out, it's a fantastic feature. I use it all the time to send a page I'm reading on my phone to my laptop, and vice versa.

pkrumins 10 hours ago 2 replies      
I just added Firefox 52 to Browserling. You can try this new version at this URL without installing it:


If the demand is too high then you'll have to wait in a queue for a while to try it. I'm adding more servers right now to let more people try it without waiting.

yoavm 8 hours ago 2 replies      
And still no support for <input type="date">. I just can't believe Firefox is behind Edge on this one. I've been using Firefox as my main browser for years now, and it feels so bad that I always need to use a polyfill when developing a form with a date field - to support Firefox. It's embarrassing that Chrome has supported it since... Chrome 20, in 2012.
andreyv 12 hours ago 1 reply      
Starting from this release, Firefox now requires PulseAudio for sound on Linux [1]. ALSA can still be enabled at build time for now, but is not supported.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1247056

kyoji 13 hours ago 1 reply      
Congrats Firefox team! I've been using the latest Nightly releases (for container tabs!) as my daily driver and it has been a rock-solid experience.

Thanks for the hard work

__s 13 hours ago 1 reply      
Past few days I've been making a Befunge JIT that targets WASM: https://github.com/serprex/Befunge/blob/master/funge.js

Ended up finding that Firefox 52 is overly accepting in validation. Dead code is allowed to pop from an empty stack, whereas in Chrome that's not allowed (as per spec). It's already fixed in upcoming versions of Fx.

kamac 13 hours ago 9 replies      
> Removed support for Netscape Plugin API (NPAPI) plugins other than Flash. Silverlight, Java, Acrobat and the like are no longer supported.

Good bye, you will be missed (not).

hedora 11 hours ago 3 replies      
sigh I had to switch to Chromium because they've broken + disabled HW acceleration on Linux, and my 2016 netbook is too slow to browse amazon or read news without it. Here is the bug:


In the release notes for 52, it says hangouts is broken. I wonder if this means all of webrtc is broken or not.

Regressing "stable" features like these is creating serious problems for end users. I wish they'd focus on keeping the ship afloat instead of continuing to chase the new shiny. I really don't like relying on google (or any other ad/surveillance shop) for my web browser.

sp332 13 hours ago 3 replies      
You might want to switch yourself to the ESR branch if you depend on add-ons that will not update to the new Web Extensions API. The old extensions model is scheduled to be removed in FF 57, later this year. But 52 ESR will be supported until at least mid-2018.

Edit: switched "plugins" to "add-ons".

beefman 4 hours ago 0 replies      
If you're using Tree Style Tab and this upgrade causes new tabs to appear after a delay of many seconds... setting browser.tabs.animate to false resolved it for me (macOS 10.12).
MichaelMoser123 1 hour ago 1 reply      
What are the supported operating systems? I don't see any list of supported OSes. I did the upgrade and it just crashed on startup (on Windows 7 Enterprise Edition). Installing from scratch didn't help either.
nfriedly 13 hours ago 5 replies      
Firefox for Android recently stopped showing me the "Send to Firefox" option in the share menu. I previously used it all the time to send tabs to my desktop. Does anyone know what happened or how to get it back?
floatboth 12 hours ago 0 replies      
WASM, Grid, deprecation of NPAPI and SHA-1 certs, all in one release. That is huge. A very important release!
Joeboy 13 hours ago 4 replies      
Doesn't mention if the major problem I have with Firefox 51 is fixed. It makes a new, unauthenticated request every time I do View Source rather than showing the source of the page I'm looking at, which is an extreme PITA.

If View Source works properly, that'd be worth upgrading for immediately.

Google featured snippets are worse than fake news theoutline.com
397 points by scribu  2 days ago   183 comments top 32
ronack 1 day ago 4 replies      
I've also wondered why Google isn't held responsible for publishing libelous claims and hoaxes as facts. Examples:

Is Hillary Clinton a pedophile? https://www.google.com/search?q=is+hillary+clinton+a+pedophi...

Is John Travolta gay? https://www.google.com/search?q=is+john+travolta+gay

Does Lil Wayne have HIV? https://www.google.com/search?q=does+lil+wayne+have+hiv

Perhaps worse, this is what drives Google Home's question answering. Yes, they say "according to so-and-so" first, but if Google is responsible for "organizing the world's information", they are essentially endorsing that answer as the best response. They've gone too far in favor of recall over precision/reliability and need to dial it back. Otherwise you end up with crap like this:

Is Earth flat? https://www.google.com/search?q=is+earth+flat

DanielBMarkham 1 day ago 3 replies      
Yikes. More of this "People only want to be told what to do" stuff.

I can easily think of a dozen political questions for which no simple answer would be correct -- the language is simply too fuzzy. For many, many things, this is a good idea. The date of Easter is the date of Easter. But the glaring danger here is that for many things this is a freaking evil dystopian nightmare. Why? Because Google will keep tweaking these impossible questions until they look real enough to most people that nobody complains. At that point, millions, perhaps billions of people are asking complex questions to a little box that has designed itself to give a plausible but incomplete or wrong answer.

Epistemology, Google. There are things you can know and things you cannot. Please do not treat them all the same.

TheGRS 1 day ago 6 replies      
While hiring a ton of people to fact-check would be one solution, that obviously wouldn't be very scalable. I think the problem lies in Google's algorithm. They seem to be pulling answers from well-visited sites that may purport to have an answer to these questions. Quantity of visits does not equate to truthfulness. What Google should at the very least do is whitelist certain publications, like Encyclopedia Britannica or Wikipedia, for answers to pretty simple stuff. For more complex stuff, maybe they could source academic journals and certain newspapers. But throwing caution to the wind and hoping that the web crawler knows best really will hinder their ability to be a source for gaining knowledge.

What's weirder to me is that it seemed like they were going with my proposed route for a pretty long time and only recently started providing dopey answers. Maybe it's part of a grander experiment they're doing to vet question-answering AI?

TuringTest 1 day ago 8 replies      
Why is it that hard for engineers to rely on good old attribution?

If every Google's featured snippet started its reply with "Breitbart says..." or "Trent Online, the leading Internet Newspaper in Nigeria said...", it wouldn't matter so much for those inevitable cases when the reply is taken straight from a white-supremacist or radical anarchist forum. The problem comes when the same reply is provided as "Google's true answer to the question" without further caveats.

jacquesm 2 days ago 3 replies      
This is all mostly because google went from search engine indexing other people's stuff to site that you go to to get answers (whether those answers are based on copyright infringement or not is another matter).

A similar thing happens in Google News, where stuff from sites like breitbart.com is mixed in with reputable news sources, making it look as though they are of a similar degree of quality.

benmcnelly 1 day ago 3 replies      
Google is a search engine for finding webpages, not facts. That's where this "fake news" story should end. This is not a political problem, it's a societal one, and you are barking up the wrong tree.

Wikipedia is a free online encyclopedia, created and edited by volunteers around the world. It is not authoritative, and anyone who treats it as such should be educated about what it is and is not. Just because it's moderated and supported by links to facts doesn't mean that every bit of content is free of bias or the whole truth. It's generally accepted that this is the case. Same thing goes for Snopes and some other fact-checking sites. They are generally looked to for a reasonable amount of truthfulness based on reputation. Same could be said for various media outlets, depending on your preference and bias.

Social media sites and search engines are not responsible to tailor their content to fit your expectations of what truth or reality are. Stop being a child.

"but, people expect to be able to Google something and results and snippets be the truth!"

Well, thats a problem for sure.

How about we try and fix that expectation instead? Feel free to teach your children and anyone you know who uses the internet, that (surprisingly) anyone from anywhere can still get online and post anything at any time, and you should fact check multiple places and resources instead of trusting the first result from your favorite search engine.

"but by offering snippets of results from programmatically generated search results (which is super handy 99% of the time) Google is publishing false truths!"

Right, and you can probably still Google-bomb the image search to show an image you want based on a certain search term. It's the internet, not your personal fact butler, no matter how it's advertised. It's 2017, and common sense and intelligence are still recommended for most tasks.

lloydde 2 days ago 2 replies      
This is brutal. I'm often thankful for the quick answers to measuring or history questions, but even for these easy questions I've seen Google present somewhat incorrect or confusing information as authoritative.
scandox 1 day ago 0 replies      
A fantasy:

The idea takes hold in the collective Google-consciousness that a fact is whatever the majority of their users believe to be a fact. There is a precedent for this, after all. They defined Spam as whatever their users said was Spam.

At first there is some friction between the knowledge of internal Google personnel (especially down in that pesky engineering department) and the new shared reality developing out in Userland. However, once an appropriate terminology is developed (in-facts and ex-facts) even the engineers are satisfied - after all their main concern was being able to draw nice clear lines between things.

Meanwhile in Userland while the boiling point of water is largely unchanged, the death toll in the Biafran blockade has shrunk to approximately 17 persons (with some arguments over whether to count people who were over 80 when it started).

Then one day an Engineer has a neatO idea which will eliminate the need for storing two sets of facts and thus save valuable Petabytes of storage and, more importantly, significant code complexity. He calls this idea Authority. Pitching it to his superiors in the marketing department he explains: "You see opinions are like assholes" - (there's a collective wrinkling of marketing noses) - "everybody's got one!". He goes on to explain that some people have more knowledge and are more scrupulous about accepting new facts than other people. Authority would identify such users and give greater weight to their activity and feedback. A profound silence falls over the room. A voice comes over the meeting room Intercom:

"Are you saying...are you suggesting...that some of our users are better than others?"

"Not better, " says the cowed Engineer. "Just more...um...authoritative. Purely in an informational sense."

"Authoritative." The voice draws the word out as if profoundly contemplating its meaning. "OK. We're going to submit this idea to our Hard AI (Larrey) - the secret one we're afraid to network."

Everyone waits. After a minute the voice says:

"Larrey has a question for you, Engineer. It is this: If you're so smart, how come you ain't rich?"

dellsystem 1 day ago 0 replies      
I came across this issue just now when searching for "ec2 pricing" - the featured snippet links to https://aws.amazon.com/emr/pricing/ instead of https://aws.amazon.com/ec2/pricing/, which looks close enough to the correct URL that I didn't notice it was the wrong page until I realised I couldn't find costs for i3 (which just came out). I'm surprised that no one at Google has fixed it yet; surely there are at least some Google engineers that use AWS for personal projects.
jim-jim-jim 1 day ago 0 replies      
When my city was hit by a nasty earthquake last year and I was using Google to keep track of aftershocks, announcements, etc. I remember seeing some random weirdo's blog entry about the earthquake being caused by a government superweapon interspersed with all this otherwise valuable information.

This was in the special element at the top, not the normal search results.

tyingq 1 day ago 0 replies      
Some more fun ones.

What foods cure cancer? https://www.google.com/search?q=What+foods+cure+cancer%3F

What spice cures diabetes? https://www.google.com/search?q=What+spice+cures+diabetes%3F

Which presidents are rapists? https://www.google.com/search?q=Which+presidents+are+rapists...

Can carrots cure cancer? https://www.google.com/search?q=Can+carrots+cure+cancer%3F (apparently this one is fixed, try the search below)

Can carrot juice cure cancer? https://www.google.com/search?q=Can%20carrot%20juice%20cure%...

saurik 1 day ago 3 replies      
The "why are firetrucks red?" one is really interesting as the page result itself is great, but Google is pulling the wrong snippet of the page to feature as the answer.
rtpg 1 day ago 1 reply      
This is a great set of examples of this problem.

Of course, the offered solution is "hire a bunch of people to check the facts", which seems to be underestimating the scale of this issue for humans, and perhaps overestimating the difficulty of classifying credibility.

Considering that Google's bread and butter is general site credibility, having some "truthiness vector" added into the mix doesn't seem impossible?

Someone might say "why does Google decide who is credible", but they already do this through the search results anyways. They just don't seem to be able to differentiate between something matching a search, and something being factually accurate.

raverbashing 1 day ago 1 reply      
Whenever someone says the humanities are "useless", point them at this example.

Whenever there's a measure of something, people will optimize for that measure. But trust is not directly measurable.

But even then, Google should have known better than to weigh fringe sites the same as, for example, Wikipedia.

valuearb 2 days ago 2 replies      
I'd like to read the article but it's so irritating to have it only load a page at a time on my iPad that it's unreadable.
chatmasta 1 day ago 0 replies      
Perhaps we need a meta tag for "citations" in news articles. That would give the bots a way to determine how "factual" a news piece is (vs an editorial), but would probably just end up getting gamed like every other meta tag.

And then there's the problem that CNN would be "fake news" because you can't exactly provide a citation for "anonymous high ranking intelligence community officials." But then... maybe that's a good thing.

ridiculous_fish 1 day ago 1 reply      
Calorie counts are another that google often gets wrong. Try "Calories in corn" or "calories in black beans."
losteverything 1 day ago 0 replies      
> Was President Warren Harding a member of the KKK?

We ask questions now because we can get "an answer" so quickly.

I would have liked the prof to push back on the student and ask: "Why do you want to know?"

The answer is yes. Now what?

The answer is no. Now what?

Are you asking for what reason? What will you do with a yes answer and a no answer?

Then I would break into a short lesson in decision making: decisions are just based on future probabilities, and future probabilities are based partially on the questions you answer.

If the question is for academic purposes only, then good... But I believe we ask dumb questions just because we can and get an answer.

Start by not asking questions where the answer is useless.

lostphilosopher 1 day ago 0 replies      
Google offers sources with its claims. I'd like the ability to Pandora-style thumbs-up/thumbs-down these sources so that Google won't keep giving me answers from them. I'm tempted to think that if enough people did that, Google could start making generalizations, but that has its own set of pitfalls... (Even if it only generalized for me.) Still, as an individual user of Google I would like the ability to tell it not to give me answers sourced from Weekly World News.
jccalhoun 1 day ago 0 replies      
I was going to share this story on facebook but when I pasted in the URL, the headline that came up was "Why does google think Obama is planning a coup d'etat?" (screenshot: http://imgur.com/a/keXTz ) I'm assuming that it is theoutline.com that is feeding facebook this headline. I think twice about sharing a story with such a linkbait headline - especially when the actual site doesn't have that headline.
Aissen 1 day ago 1 reply      
askvictor 1 day ago 0 replies      
Surely part of the problem is that these sorts of stories only exist on fake news/alt-right sites, so that's the only data point that Google has?
matthewmacleod 1 day ago 0 replies      
Google's 'featured snippets' are universally fucking shit.

It's a bit of a rant I guess, but I'm annoyed by the extent to which Google has become less useful as a search engine while trying to do all of this 'extracting knowledge from the web' stuff. Because it's really not doing a great job of the latter.

k_sze 1 day ago 0 replies      
This would be a non-problem if everybody studied ToK and consistently used, you know, their brain.

But, unfortunately

return0 1 day ago 0 replies      
This is the revenge of the news sites, I guess. Google can only be trusted by people as long as it keeps giving valid responses. If this continues there will be growing mistrust from its users, leading to lower revenue etc etc.
superasn 1 day ago 1 reply      
Google has developed advanced NLP algorithms, so it should be easy for them to corroborate such data before it gets pushed to featured snippets. On second thought, maybe they're already doing it and sites, including authoritative ones, are just copying off each other.
bakhy 1 day ago 1 reply      
people trust Google, which IMO makes them responsible here. IANAL, but if they would for example be caught giving Holocaust-denying answers in Germany, where that's illegal, could they not be sued?
thomasjudge 1 day ago 0 replies      
The point of Google is supposed to be quality search results. If we're not getting that...
themark 1 day ago 0 replies      
More often than not, Google "Alerts" are from very questionable sites.
wry_discontent 1 day ago 0 replies      
I don't think that's a Monty Python joke.
DaUR 1 day ago 2 replies      
Google wants to avoid editorializing, and be able to throw their hands up and say "not our fault!" due to the plausible deniability of "algorithms".

They can't pull that off anymore.

There needs to be a shift towards a Web-of-Trust-like system, where some sources are recognized as authoritative. Government websites, like the CDC, for example. Big media outlets. Scientific associations (IPCC, APA, etc). Not all information is equal. Even if, for example, government dietary recommendations are outdated in some ways, that should be the authoritative answer Google provides, unless there is an equally-reputable but more accurate source (here, prioritize medical sources over governmental ones).

A high school dropout doesn't know more about academic subjects than a PhD grad. A blog isn't as authoritative as a reputable source. This is easy stuff.

Then the question becomes: "some people don't see reputable sources as reputable". That isn't Google's problem, and it can't be fixed at their level.

transfire 1 day ago 0 replies      
> (It should be noted that Wilson was still a notorious racist.)

See, that's the funny thing about fake news: not even this article, which seeks to set us all straight on the matter, could avoid it. Just because you get your facts from a source you consider reputable doesn't make it so. In fact, Wilson was actually no more "racist" than Abraham Lincoln, but no one would ever call Lincoln a racist.

CS 20SI: Tensorflow for Deep Learning Research stanford.edu
391 points by nafizh  4 days ago   29 comments top 10
jimmies 4 days ago 2 replies      
Side note, the instructor of that course, Chip Huyen, is nothing short of brilliant.

Born in Vietnam, at 18 she decided not to go to college, and went traveling with an empty pocket around the world and wrote books. I heard about her getting accepted to many tip-top schools and choosing to go to Stanford 4-5 years ago, at 22-23 years old.

So it seems like she is still an undergrad/master's student at Stanford or something right now, and she is already teaching a course. Definitely going to go far.

morgangiraud 4 days ago 1 reply      
If anybody is interested: I've been learning it and wrote some blog posts about it:


I hope it can be useful for future learners! (you can find the current list of articles at the end of the first one)

eb0la 4 days ago 0 replies      
Also CS231n - http://cs231n.stanford.edu/ - Convolutional Neural Networks for Visual Recognition
FabHK 4 days ago 1 reply      
FWIW, Udacity has had a Deep Learning with Tensorflow course for a while. (Note that I'm somewhat ambivalent about Udacity - a lot of it is copy-and-paste stuff, though it does help to get you started.)


riston 4 days ago 2 replies      
Are the videos also somewhere or only slides and notes are shared?
fsiefken 4 days ago 2 replies      
Are there any other university level courses for Tensorflow available?
bsr203 4 days ago 0 replies      
Just saw a youtube channel, which present the slides from this course as they available. https://www.youtube.com/channel/UCMq6IdbXar_KtYixMS_wHcQ . HTH
thro1237 4 days ago 0 replies      
Are the videos available for this course?
O_nlogn 4 days ago 0 replies      
Thanks for sharing, I've been looking for a resource like this for a while!
pratap103 4 days ago 0 replies      
Thanks a lot! I've been looking for something like this.
China congress: BBC team forced to sign confession bbc.com
387 points by duncan_bayne  4 days ago   152 comments top 10
phreack 4 days ago 11 replies      
The current state of human rights in China, and its utter, utter lack of admonition by world leaders due to their dependence on their economy is to me one of the greatest tragedies of today.

I truly don't know what could be done to help free speech prosper there as it should.

giis 4 days ago 1 reply      
On free rights: there was one time a student was curious about other things, like religious belief. The short conversation went like this:

He: "Are you a Buddhist?"
Me: "No, I'm Hindu... but I like Buddhism more."
He: "??"
Me: "What? I don't understand your question."
He: "Why do you do that?"
Me: "Do what?"
He: "Like Buddhism, while being Hindu?"
Me: "Nothing wrong in it, I like the teachings of Buddha."
He: "Lucky, you can choose what you want. Here we can't openly say we follow a religion."
Me: "Why?"
He: "Our govt won't allow such things."

We know the Chinese Communist Party has a strong influence on people's way of life, but I was hugely disappointed to hear that people can't even decide which religion to follow, if one wants to. I agree, every country has its own issues, but free speech and free rights are important. For example, if social media like Facebook/Google were banned in other countries, there would be a huge uproar by people in the name of free speech. Even those who never used social media would be against the govt's decision.

Sincerely hope common people from the world's largest population will have these rights in the future.

philliphaydon 4 days ago 4 replies      
I'm confused. So we are upset with China and its lack of human rights. But we get upset with Trump for talking to Taiwan...

If we were advocating human rights we would also advocate against the One China policy.

throw2016 4 days ago 0 replies      
Human rights have always been used as a cynical tool by a clique of countries to forward their own interests.

The Saudis can be ignored for decades in spite of being the worst offenders on any parameter, but Iran, Syria or Libya must be bombed. That by itself stops any credible 'human rights concerns' by the US or Europe, and in fact shifts them the other way toward warmongering, massive violations, and basically destroying these countries. What can be worse for anyone genuinely concerned about human rights?

There have been zero consequences for US citizens involved in these violations, from Iraq to Snowden, so the idea that we can put the spotlight on anyone is laughable and smacks of dissonance. We can't be building the infrastructure to run total surveillance states and attack other countries at will, and yet lecture others on human rights.

This is politics masquerading as concern, and it's the worst sort of exploitation, one that trivializes and makes a mockery of 'human rights'. This type of sanctimonious posturing is completely disconnected from the way the world works and way past its sell-by date. Integrity requires either stopping the self-serving geopolitical agendas or stopping the posturing.

TorKlingberg 4 days ago 1 reply      
Tell me again how mainstream media is outdated and we should listen directly to the official sources.
js8 4 days ago 1 reply      
Is there some (longer) article with more context in it? I would like to read more about it.

Anyway, this is why free speech is important.

brilliantcode 4 days ago 1 reply      
You can see how far behind China is compared to the West. The concept of human rights is non-existent in newly emerging markets. It took the US a few centuries; I'd imagine it would take China a similar amount of time before it becomes "cool" like the US.

This is why I don't see China replacing US hegemony anytime soon in our lifetime. Civilizations upgrade themselves internally as they realize the overwhelming benefits of valuing human life.

popobobo 4 days ago 1 reply      
hohohmm 4 days ago 2 replies      
Why am I seeing stuff like this on Hacker News? Which part of it is relevant?
rodionos 4 days ago 1 reply      
Not that I think the subject is not important, but is this really topical for HN?

On the other hand, it has 100+ votes, so perhaps HN demographics are changing.

AMD Prepares 32-Core Naples CPUs for 1P and 2P Servers: Coming in Q2 anandtech.com
287 points by BlackMonday  13 hours ago   126 comments top 18
throwawayish 11 hours ago 8 replies      
I think Naples is a very exciting development, because:

- 1S/2S is obviously where the pie is. Few servers are 4S.

- 8 DDR4 channels per socket is twice the memory bandwidth of 2011, and still more than LGA-36712312whateverthenumberwas

- First x86 server platform with SHA1/2 acceleration

- 128 PCIe lanes in a 1S system is unprecedented

All in all Naples seems like a very interesting platform for throughput-intensive applications. Overall it seems that Sun with its Niagara approach (massive number of threads, lots of I/O on-chip) was just a few years too early (and likely a few thousand per system too expensive ;)
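The memory-bandwidth claim above is easy to sanity-check with back-of-the-envelope arithmetic. The channel counts come from the comment; the transfer rates (DDR4-2666 for Naples, DDR4-2400 for a quad-channel LGA 2011-3 part) are assumptions for illustration:

```python
# Rough peak-bandwidth arithmetic behind "8 channels is roughly twice
# the bandwidth of [LGA] 2011". Transfer rates are assumed, not quoted
# from AMD or Intel spec sheets.
def peak_bandwidth_gbs(channels, mt_per_s, bus_bytes=8):
    """Peak bandwidth in GB/s: channels * transfers/s * 8-byte bus width."""
    return channels * mt_per_s * 1e6 * bus_bytes / 1e9

naples = peak_bandwidth_gbs(channels=8, mt_per_s=2666)   # ~171 GB/s per socket
lga2011 = peak_bandwidth_gbs(channels=4, mt_per_s=2400)  # ~77 GB/s
print(f"Naples: {naples:.0f} GB/s, quad-channel LGA 2011-3: {lga2011:.0f} GB/s")
```

Under those assumed speeds the ratio is a bit better than 2x, consistent with the comment.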

arca_vorago 10 hours ago 2 replies      
This is what I have really been looking forward to. I theorycrafted a more ideal system for the genetics work a former employer was doing, but didn't get to build it until after I had left there. A quad 16-core Opteron system for a total of 64 cores (for physics calculations in COMSOL). I think there is more potential use for high actual-core-count servers than many people realize, so I can't wait to build one. (For my purposes these days it's as a game server in a colo; one of my projects is a multiplayer UE4 game.)

At the previous job where I built the 64-core system, I even emailed the AMD marketing department to see if we could do some PR campaign together, but I think it was too soon before the Naples drop, because I never got a response. Here's to hoping Supermicro does a 4-CPU board for this... 128 cores would be amazing. (But I'll take 64 Naples cores as long as it gets rid of the bugs and issues I found with the Opterons.)

keth 10 hours ago 0 replies      
I'm looking forward to the benchmarks since the performance per watt of the desktop parts (Ryzen R7) seems to be really good. Quite curious how it will compare against Skylake-EP.

A quote from an Anandtech forum post [0] reads promising:

"850 points in Cinebench 15 at 30W is quite telling. Or not telling, but absolutely massive. Zeppelin can reach absolutely monstrous and unseen levels of efficiency, as long as it operates within its ideal frequency range."

A comparison against a Xeon D at 30W would be interesting.

The possibility of this monster maybe coming out sometime in the future is also quite nice: http://www.computermachines.org/joe/publications/pdfs/hpca20...

[0] https://forums.anandtech.com/threads/ryzen-strictly-technica...

drewg123 11 hours ago 1 reply      
The important thing here, from my perspective, is how NUMA-ish a single socket configuration will be. According to the article, a single package is actually made up of 4 dies, each with its own memory (and presumably cache hierarchy, etc). While trivially parallelizable workloads (like HPC benchmarks) scale quite well regardless of system topology, not all workloads do so. And teaching kernel schedulers about 2 levels of numa affinity may not be trivial.

With that said, I'm looking forward to these systems.
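A toy cost model makes the two-level-NUMA concern concrete. The latency numbers below are invented for illustration, not measurements of any real system:

```python
# Toy model of two-level NUMA cost on a multi-die package like Naples.
# The nanosecond figures are made up for illustration only.
LATENCY_NS = {"local_die": 90, "same_package": 140, "remote_socket": 200}

def avg_access_ns(fractions):
    """Expected access latency given the fraction of accesses per tier."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(LATENCY_NS[tier] * f for tier, f in fractions.items())

# A NUMA-unaware allocator spreads pages uniformly over 8 dies (2 sockets):
uniform = avg_access_ns({"local_die": 1/8, "same_package": 3/8, "remote_socket": 4/8})
# A scheduler that understands both NUMA levels keeps most memory local:
pinned = avg_access_ns({"local_die": 0.90, "same_package": 0.08, "remote_socket": 0.02})
print(f"uniform placement: {uniform:.1f} ns, pinned: {pinned:.1f} ns")
```

Even with these invented numbers the gap is large, which is why topology-aware scheduling matters for non-trivially-parallel workloads.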

kiddico 11 hours ago 3 replies      
Sorry, my google-fu isn't on point today; what's the difference between 1p and 1u. or 2p and 2u? My nomenclature knowledge is lacking ...
rl3 11 hours ago 5 replies      
In previous threads there was discussion about Intel processors, specifically Skylake (which is a desktop processor), being superior for server workloads involving vectorization.

How will Naples fare on this front?

daemonk 11 hours ago 1 reply      
Nice. This is the more interesting market for AMD rather than the gaming market in my opinion. 128 PCIe lanes and up to 4TB of ram will be awesome.
deepnotderp 9 hours ago 0 replies      
I've long been advocating for a high i/o cpu with several pcie lanes. 128 lanes will support 8 GPUs at max bandwidth. AMD has positioned itself well.
ajaimk 6 hours ago 0 replies      
This is the first I'm reading about the 32 cores being 4 dies on a package. Not sure how well that will work out in practice. IBM does something similar with Power servers, where 2 dies on a package are used for lower-end chips.

Basically, using multiple dies increases latency significantly between the cores on different dies. This will affect performance. I will not judge till I see the benchmark though :-)

andy_ppp 11 hours ago 2 replies      
How well does, say, Postgres scale on such hardware? Is anything more than 8 cores overkill, or can we assume good linear increases in queries per second...
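Whether extra cores pay off mostly comes down to the serialized fraction of each transaction (locks, WAL writes, and so on). A quick Amdahl's-law sketch; the 5% serial fraction is an assumed figure for illustration, not a Postgres measurement:

```python
# Amdahl's law: if a fraction s of the work is serialized, speedup on
# n cores is 1 / (s + (1 - s) / n). Throughput flattens long before
# core count does.
def speedup(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (1, 8, 32, 64):
    print(cores, round(speedup(cores, serial_fraction=0.05), 1))
# With 5% serial work, 64 cores yield only about 15x, not 64x.
```

So "linear increases" is optimistic unless the workload is almost perfectly parallel (e.g. many independent read-only queries).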
Coding_Cat 9 hours ago 2 replies      
With how big these chips are getting, I wonder if the next iteration will have an HBM last-level cache on chip.
emcrazyone 2 hours ago 0 replies      
can anyone chime in as to why use PCIe over something more core to core direct? As I understand it, the CPU still needs to talk to a PCIe host/bridge controller. Why not have something that is more direct between processors?
HippoBaro 7 hours ago 0 replies      
I think Naples will be a very serious threat to Intel in the server market. As Ryzen benchmarks & reviews have shown, Zen really shines in heavily multithreaded applications, the typical workload of a server.

Though I am kind of worried concerning memory access. Latency penalties when accessing non-local memory are very high on Zen CPUs due to the multi-die architecture design.

Does that mean we will finally see some serious interest in shared-nothing designs and the like in the future?

Demcox 6 hours ago 0 replies      
Just having one of those in a workstation gets me all warm and fuzzy.
m3kw9 8 hours ago 0 replies      
In other words, we have a faster server chip coming
Symmetry 8 hours ago 1 reply      
Semi-ironically this looks like just the thing to use in a supercomputer controlling a good number of NVidia GPUs.
__mp 7 hours ago 0 replies      
I'm wondering how they will stack up against Xeon Phi.
mtgx 10 hours ago 1 reply      
If they have a much better performance/$ than Intel, which they likely will have, it sounds like a good opportunity for AWS to significantly undercut Microsoft and Google (which recently bragged about purchasing expensive Skylake-E chips).
Startup School YCs Online Class ycombinator.com
405 points by piyushmakhija  1 day ago   62 comments top 24
estsauver 1 day ago 2 replies      
I'm very optimistic about this program, but I also wonder how much of the success of the main program and the fellowship can be scaled into a MOOC.

I was part of YCF1 as AirPaper, and while I think it was an incredibly valuable experience, most of the things the YC partners told us were things that we "knew." Most of the advice was available in one essay or another, but it still felt very, very different to have one of the partners tell us directly what we should be doing.

I actually think in a lot of ways the YC program felt a lot like therapy for our startup. We'd talk in group office hours about the other successes and problems other startups were facing. Individual office hours felt like one on one therapy where we got very specific direction out of the partners, but most of the time they were really helping my cofounder and I work out what the solution to our biggest problem was.

While I think the information in this class will certainly be valuable, I hope that the participants of this class really convince themselves that the advice applies to them, right now. To get the most out of this class, I think you need to abandon cynicism and let the message reach through the screen and grab you and your cofounders by the lapels.

rgovind 1 day ago 1 reply      
I find it quite surprising that none of these startup classes teaches how to do market research and market sizing. Yet when a new batch of startups demos every six weeks, the first thing they talk about is size of opportunity. Market opportunity is one of the things to consider before deciding on a problem to work on. Please consider adding this info to your classes.

Also, none of them teach how Sam, any YC partner, or any VC would approach or start thinking about whether a startup is worth investing in or not. That would be hugely beneficial to some of us, as it would help us in channeling our thoughts.

ChicagoBoy11 1 day ago 2 replies      
I was laughed off of HN when I suggested a few years ago that YC was en route to becoming the world's first true 21st century university.

Applying pg's question of "What Microsoft is this the Altair Basic of?" to YC itself, the answer to me has always been a modern university. From the regular cohorts, to nascent investment in basic research, to a residential component, to doing things that make a big impact on the world, to being highly profitable (yes, elite higher ed is a fantastic business), it always seemed to me that all the ingredients were there.

This is yet another small step in that direction.

ploggingdev 1 day ago 2 replies      
> Build a community of entrepreneurs who can encourage and teach each other

Working towards the mentioned goal can start right here on HN by adding a navbar link titled "MOOC" like what YC did with "Apply HN". I am curious to know if YC considered involving the HN community directly like they did with the Fellowship. I remember the whole controversy involving pinboard and YC receiving a lot of flak for it, but I hope that experience did not deter YC from involving the HN community if they had considered involving the HN community. I imagine there are a lot of enthusiastic folks around here willing to give feedback to startups.

Sam Altman mentioned somewhere (can't remember where) that they would experiment with giving funding to a few startups taking part in the MOOC. I wonder why they pushed that to the next offering of the MOOC — why not start now?

Also, what happened to the Fellowship? I thought it was a great idea, but I get the impression (from having read comments about the Fellowship written by YC partners) that YC felt it might not have been the best way to scale the funding of new startups, and so the MOOC was born.

Suggestion: the Fellowship was a great idea and so is this MOOC, so why not make the Fellowship a subset of the MOOC, i.e. fund a few startups taking part in the MOOC with $20,000 or so, and please find a way to involve the HN community directly.

r2dnb 1 day ago 0 replies      
This is a smart move by YC. This gives them a platform to assess promising ideas and teams at no cost. I wouldn't be surprised if at some point teams become required to graduate from the program before being able to apply to YC.

I am not trying to be cynical here or downplay the value of their contribution. I think that they will genuinely teach the best of what they know to people. And what I am actually saying is that this is a good example of healthy capitalism.

This is an example of the fact that it is possible to articulate your best interests in a way that also helps people.

But I have to say that I also admire the strategy itself. This is the kind of thing that gets you thinking "they should have done it a long time ago" after you see the solution. I never thought about it either, to be fair, but this is how you recognize the best solutions: they shine through their simplicity.

Anyway, well found and wishing you the best with this, and hopefully this will help and inspire many people to make the world a more pleasant place.

davidcaseria 1 day ago 0 replies      
The Startup School logo is really cool: a bunch of Y Combinator 'Y's forming a tree. It fits what they seem to be going for by calling the MOOC 'Startup School' and symbolizes the programming definition of Y Combinator. Whoever came up with that design does good work. :)
urs2102 1 day ago 0 replies      
Are there any interesting insights learned from running the fellowship that contributed to the development of the MOOC?

I definitely think the community would love to hear them!

god_bless_texas 1 day ago 1 reply      
I hope that there is some kind of 1:1 they can do here. We're part of a startup in a non-techy city. We're a service business and although our immediate future is bright, I have major questions about how to scale our business outside of acquiring similar companies doing stuff close to what we're doing and changing their strategy. Sure, there are mentors here who have grown a business...but I want YC-enabled growth...
adentranter 1 day ago 0 replies      
As someone who lives in a regional city in Australia, I'm very excited to have the opportunity to join the program.

Watching the lectures on YouTube offers a one-way information transfer; while that's an amazing amount of information, I'm hoping that this MOOC will allow a bit more two-way communication.

So Thanks.

bootload 1 day ago 0 replies      
"30-40% of these will be recorded advice sessions with startups; we've always heard from entrepreneurs that this is some of our most helpful content."

Excellent. Learning online is best served by using video, following the model of getting the best on tape and distributing it to a mass audience (cf. MIT's online 6.001 lectures; I'm thinking of the Hal Abelson lectures from '85).

"Well have an online community via Slack and email where you can connect with other entrepreneurs in the class."

Expedience. Not a fan of Slack. Is a YC discussion/MOOC platform being planned?

"4) At the end of the class, participants will have a chance to share what they've built with a wide audience."

How is this happening? Slack?

viet_nguyen 1 day ago 0 replies      
Nice! I'm living in Vietnam and I'd love to learn from this program.
ThomPete 1 day ago 3 replies      
This is great; however, I don't believe it will make you a better entrepreneur.

To me at least, the only thing I ever learned from was doing it and making my own experience, which would often, if not contradict what I read, then at least illustrate that nothing beats doing.

Peteris 1 day ago 0 replies      
The test is whether the core YC aspects can scale?

* How can we motivate people to build on a weekly basis?

* Can we be a credible source of advice so people can save time by not second-guessing it?

* Can we identify which problems need resolving and intervene at depth?

* Can we connect the community through this shared experience?

Lots of assumptions to test.

esseti 1 day ago 0 replies      
Nice, good to see that they also do 1-on-1s; wondering how this can scale with the "few" startups that may apply.
SmellTheGlove 1 day ago 0 replies      
Will there be an in-person start up school this year like the conference last year?
everybodyknows 1 day ago 0 replies      
Dates of recording for the videos on the home page of startupschool.org? Would be helpful to prospective students.
saycheese 1 day ago 2 replies      
>> "All rights reserved."

Really, is this the best YC is able to do?

Truly interested in knowing why the course is not released under something like one of the Creative Commons licenses:


ben_hall 1 day ago 1 reply      
Unable to sign up at the moment, getting a 522 from CF after entering email.
fudged71 1 day ago 1 reply      
The registration form is not working. Click submit and it just spins
bobwaycott 1 day ago 1 reply      
Tried to sign up for notifications, and just see an endless spinner.
it_learnses 1 day ago 0 replies      
The conference site is timing out for me.
sama 1 day ago 0 replies      
benrmatthews 1 day ago 3 replies      
Lots of analytics, advertising and tracking being used on the site: https://builtwith.com/www.startupschool.org

Expecting to see lots of ads for this appearing soon.

rexreed 1 day ago 1 reply      
There's a two-day in-person version of this on the East Coast on July 25-26, 2017 at Georgetown University called Startup Spectacular. Worth checking out if you want an in-person version: http://www.startupspectacular.com. Edit: yes, just noticed this is not the same idea / concept. I was posting based on the feedback that a MOOC doesn't seem to work for everyone for this sort of thing.
AMA: Explaining my 750 line compiler+runtime designed to GPU self-host APL youtube.com
325 points by arcfide  2 days ago   151 comments top 22
dang 2 days ago 2 replies      
This discussion originated here: https://news.ycombinator.com/item?id=13565743

...and the live stream offered there happened here: https://news.ycombinator.com/item?id=13638086.

We asked the author to do one more thread about it because the latter didn't get much attention, and we thought you might find it interesting. It's rare for work this technically sophisticated to also be this deeply counterintuitive.

losvedir 2 days ago 1 reply      
Heh, as I tweeted a couple years ago[0]:

> Ah, APL/J/K. Time for my annual crisis of thinking everything I've ever learned about programming is wrong...

Seriously, as I'm watching this video it's equal parts fascinating, confusing, and frightening.

The world of these array languages is so foreign to my day-to-day web development work that I'm not sure what to make of it. I'm impressed that you can accomplish so much in so little space, but I have a few questions as it relates to how I work:

1. How understandable is the code if you come back to it months later? Is it "write once" or are you able to pick it up again easily?

2. Have you worked on this or other code with collaborators? How well can someone else pick up the code and start contributing?

3. It seems APL is well-suited for GPU programming like you're doing here, but do you think it's more broadly applicable? Do you think all software should be written with it, including say web development and the like?


jstimpfle 2 days ago 2 replies      
The conciseness is fascinating. One of the common criticisms of combinator-style programming is the difficulty of producing good error messages, or more generally of making small fixes in the pipeline that can't be expressed using the abstraction (i.e., the current compositional context). A concrete example I have is that in Haskell, when you develop pure code, you can't easily debug with a few print statements (more so since it's a lazy language).

I think this criticism definitely applies in the real world of ever-changing programs without a very clear scope. I assume your focus is more on the academic / "poetic" side of things, but nevertheless: How did such concerns influence the development of your compiler? How usable is the compiler from a dfns programmer's perspective?

And since APL is a very special beast: how transferable is your style to writing a compiler for a more conventional language (for instance, a procedural one), especially ones with irregularities and such?

(I apologize if these questions are covered in the video. Haven't finished it yet)
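One partial answer to the "can't print-debug pure pipelines" complaint is to wrap a stage with a tracing combinator rather than editing the stage itself. A minimal sketch in Python (the names here are illustrative, not from the compiler under discussion):

```python
import functools

def compose(*fns):
    """compose(f, g, h)(x) == f(g(h(x))) -- right-to-left, like math."""
    return functools.reduce(lambda f, g: lambda x: f(g(x)), fns)

def traced(label, fn):
    """Wrap one pipeline stage to log its input and output."""
    def wrapper(x):
        result = fn(x)
        print(f"{label}: {x!r} -> {result!r}")
        return result
    return wrapper

# Swap `traced("double", ...)` in and out without touching the stages:
pipeline = compose(str, traced("double", lambda n: n * 2), lambda n: n + 1)
assert pipeline(3) == "8"   # 3 -> 4 -> 8 -> "8"
```

The trick works because each stage is a plain function; the tracing wrapper preserves the pipeline's type while adding an observation point.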

arcfide 2 days ago 1 reply      
Thanks everyone for the great discussions and interest. I'm going to get back to this thread later, but won't be actively commenting for the rest of the day. Please feel free to continue discussion and continue asking questions and I'll get to them when/if I can.

Also, feel free to contact me directly to discuss anything at a higher bandwidth level. And as always, see the GitHub repository for more information on the project and ways that you can contribute (help out, fund the project, contribute code, bug reports, or patches).


If I can "shill" a little bit, you can fund the Open Source/Open Work aspects of this project through micro-funding at Gratipay, which helps to support and keep the Open Source and community aspects of the project going. Dyalog is a great company, but ultimately, as some have mentioned, it's the community that helps to keep these ideas moving forward and improving.

jarpineh 2 days ago 3 replies      
I had some exposure to an APL program for statistics a few years back, but never really dived in. The largest problem was that the tools I encountered then didn't lend themselves well to intermittent exploration (indeed, Dyalog APL had a Linux version that required Wine, which was very buggy). Could somebody give pointers on how to start with APL in 2017? I mean things pertaining to developer experience, such as which APL implementation is easiest to start with on macOS/Linux, how to find libraries and frameworks, good example apps on GitHub, etc.

Though I don't want to hijack this thread from questions specifically about the compiler + runtime. On the earlier thread (https://news.ycombinator.com/item?id=13565743) you mentioned two things I'd like to know more about:

- function trains: any examples? Could one do those in a Lisp such as Scheme?

- what is modern dfns-based APL? (something about a better APL for functional programming)

Thank you for your comments so far. I tried to watch the live stream, but there were some audio skips now and then. Will have to try again on the next long train ride.
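On the function-trains question above: a 3-train (fork) (f g h) applied to x means g(f(x), h(x)), and yes, the same shape is straightforward in Scheme or any language with first-class functions. A hypothetical Python sketch, using the classic example of mean written as the train (+⌿ ÷ ≢), i.e. sum divided by tally:

```python
import operator

def fork(f, g, h):
    """APL 3-train: (f g h) x  ==  g(f(x), h(x))."""
    return lambda x: g(f(x), h(x))

# mean as the train (+⌿ ÷ ≢): sum, divided by, count
mean = fork(sum, operator.truediv, len)
print(mean([1, 2, 3, 4]))  # 2.5
```

In Scheme the equivalent would be a `fork` procedure returning `(lambda (x) (g (f x) (h x)))`; trains are just a terse, point-free notation for this combinator.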

arcfide 2 days ago 0 replies      
This is an AMA for the linked compiler presentation by request. It is not live video, but I will be answering your questions directly here on Hacker News. This was requested due to some logistical issues and so forth during the live presentation. The content is currently being edited to start at the appropriate place, but at the moment you may have to skip ahead 17 minutes to catch the start of the presentation.
gwu78 2 days ago 1 reply      
This is a real gem of a comment:https://news.ycombinator.com/item?id=13571160

"petty abstractions" to assist reusability

Layer upon layer upon layer of indirection.

I want to be liberated from this mindlessness.

It is everywhere in the programming world.

Large sums of money are spent to maintain and propagate this way of thinking.

Large companies have been built upon it.

Is it possible to escape this?

These contributions from arcfide are heartening.

bakul 2 days ago 1 reply      
This is fantastic! I can sort of grok your code at a high level, but only enough to appreciate it, not to do much with it, as that would require investing a bunch of time.

Questions:

1. Did you find anything really surprising? Some new algorithm or idiom or something that was surprisingly easy?

2. Is it worth applying some of the same tricks in compilers for other languages?

3. Have you considered using the same "frontend" to create an APL->Scheme translator? That might have more of a pedagogical value. Scheme has a great macro system and it is a HLL, so it can be a good IL at the very least; I can then translate the compiler itself to Scheme and study :-)

nodesocket 2 days ago 3 replies      
I'm a complete a noob to APL (I come from web development and ops), but reading the Wikipedia there is a section that says:

The following expression finds all prime numbers from 1 to R: (~R∊R∘.×R)/R←1↓⍳R. In both time and space, the calculation complexity is O(R^2).

How? What? I feel stupid now. ;-)
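The APL expression in question is (~R∊R∘.×R)/R←1↓⍳R: take 2..R, form the outer product of all pairwise products (R∘.×R), and keep the numbers that don't appear in that table. A rough Python transliteration (not the APL itself) shows where the O(R^2) comes from:

```python
# Sketch of the outer-product prime filter. Every composite <= n is a
# product a*b with 2 <= a, b <= n, so it appears in the product table;
# no prime does. Materializing the full table is the O(R^2) cost.
def primes_upto(n):
    r = list(range(2, n + 1))                    # R ← 1↓⍳R
    products = {a * b for a in r for b in r}     # R∘.×R (outer product)
    return [x for x in r if x not in products]   # (~R∊R∘.×R)/R

print(primes_upto(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
```

It's a beautifully declarative definition of primality, just a wasteful algorithm; the point of the Wikipedia example is the notation, not efficiency.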

gwu78 2 days ago 2 replies      
From reading your blog, I detect that you like working in MS Windows.

Can your compiler be ported to BSD?

Here is the rationale for why this can be useful: BSD can in turn be ported to new hardware with reduced amount of effort, sometimes a project that is manageable by one person.

(Compare this to what is required for porting Windows, Linux, etc. to new hardware).

Imagine an APL computer with an open source kernel that is easily ported to new hardware.

breck 2 days ago 2 replies      
Really different and fascinating video, thanks!

Two surface-level questions: 1) The "trains" design pattern you mention: do you have a link (to Wikipedia or something) with more info? I might know the pattern by a different name. 2) I liked the tree printout seen at 1:42. Is there a library or algo you used for that?

hzhou321 2 days ago 1 reply      
I understand that speed may not be an issue here, but I'd still like to ask: won't so many nano-passes affect speed?
BuuQu9hu 2 days ago 2 replies      
I don't know any languages in the APL family. Which should I learn and why? Which should I definitely avoid and why?
panic 2 days ago 2 replies      
How much abstraction overhead does the ArrayFire library introduce? Do you think you could get better performance and simplify the overall system by generating CUDA or shader code yourself?
pavanky 2 days ago 2 replies      
Hi Aaron, I saw the link earlier but did not get the time to check it out. Someone from ArrayFire pointed out that you were using ArrayFire, and now I recognize your name/id from the ArrayFire forums :)

Anyway, I am giving it a listen now.

If I had to ask a question, what is something that frustrated you about arrayfire and what do you think needs fixing ?

psandersen 1 day ago 1 reply      
This looks interesting, but getting a feel for the implications is pretty difficult without spending a lot of time.

A discussion of the implications in terms of speed, robustness, maintainability and future hardware would be really useful.

E.g., if some team did the crazy undertaking of rewriting all the software (kernel included) in a base Ubuntu install using APL on the GPU, what would the positives vs. negatives be? Is this even possible/desirable, or does it require research breakthroughs? Is this more suited to DSP functions, or even extendable to probabilistic programming?

What is the best hardware match? Does this offer advantages for synthesizing programs onto FPGAs, or some other hardware (can't remember the name: many small dumb cores with limited memory, maybe a bit like the Cell processor)?

jonahx 2 days ago 1 reply      

What are your thoughts on J vs APL? Also, is APL your preferred language for most kinds of problems? When it is not, why not?

throwaway7645 2 days ago 1 reply      
Is parallel programming really that easy in Dyalog APL? Could someone provide a trivial example?
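I can't speak to Dyalog's specific parallel facilities, but the shape of the idea is that APL's each (¨) is a map, and a map over an array parallelizes mechanically. A rough analogue in Python rather than APL (threads for simplicity; CPU-bound work would want a process pool):

```python
from concurrent.futures import ThreadPoolExecutor

def f(n):
    # stand-in for an expensive per-element computation
    return n * n

data = list(range(10))
with ThreadPoolExecutor(max_workers=4) as pool:
    # parallel analogue of APL's f¨data; map preserves input order
    squares = list(pool.map(f, data))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The array-language claim is that because operations are already expressed over whole arrays with no per-element control flow, the runtime can distribute them (to cores or a GPU) without the programmer restructuring anything.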
Yahivin 2 days ago 1 reply      
I love the fearlessness. You're now one of my heroes.
erikpukinskis 1 day ago 2 replies      
Look at this guy's code and you start to understand how we'll get to the Matrix situation of being able to watch scrolls of random-looking symbols and picture people and food and cities in our head.
zzzcpan 2 days ago 1 reply      
Is there a transcript?
beagle3 2 days ago 2 replies      
Haven't had time to watch the YouTube video..., but here's a question related to APLing in general:

Do you find that the GPU angle makes people more receptive to the ideas behind APL? In my own experience, APL/K/J quickly get an SEP field applied to them (as is also visible in the discussions linked by dang): some people dismiss them outright, and some say "well, it might have been easier to grasp if the syntax wasn't so obtuse" (which is demonstrably wrong). I've given up on evangelising APL/K, but being such a perfect match for GPU semantics, maybe there's hope for more acceptance? What's your take?

       cached 8 March 2017 05:11:01 GMT