hacker news with inline top comments    .. more ..    22 Nov 2016 News
Pipfile for Python github.com
101 points by shakna  2 hours ago   32 comments top 11
witten 46 minutes ago 3 replies      
This seems like a huge step backwards to me. Why would you want to go from a parse-able, machine-readable, data-driven syntax to something that can't easily be introspected, isn't machine-readable without firing up a full Python interpreter, is as flexible as actual code and is thus subject to all the abuse you can introduce with actual code, etc.?
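witten's introspection point can be sketched in a few lines (a toy illustration; the `source(...)` line is a made-up Pipfile entry, not the actual proposed syntax):

```python
import ast
import json

# A declarative format can be inspected without running anything:
deps = json.loads('{"requests": "==2.12.0"}')
assert deps["requests"] == "==2.12.0"

# An executable dependency file can be *parsed* statically...
pipfile_src = 'source("https://pypi.org/", verify_ssl=True)'
call = ast.parse(pipfile_src).body[0].value
assert call.func.id == "source"
# ...but as soon as it contains real logic (conditionals, loops, imports),
# the only reliable way to know what it declares is to execute it.
```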

From the original prototype's comments: https://gist.github.com/dstufft/2904d2e663461f010bbf

"- If there's a corner case not thought of, file is still Python and allows people to easily extend"

... and ...

"- Using Python file might cause the same problems as setup.py"


Also, how is the "lock file" actually distributed? Unless you can pip install a wheel and have it include an embedded lock file, then you've still got to have some out-of-band mechanism for copying the lock file around like you would a requirements.txt or even a fully-fledged virtual environment.

fphilipe 20 minutes ago 0 replies      
This is great news. Coming from Ruby and being used to Bundler, doing anything in Python or JS always was a huge pain. Countless times I deleted the current virtual environment or did an `rm -rf node_modules` to start fresh. So I'm excited to see Yarn for JS show up and now this.

The main problem with requirements.txt, as I see it, is that you don't get exact versions unless you specify them in your requirements.txt. So you'd have to have a loose requirements.txt and then generate a second requirements file after having done `pip install -r requirements.txt` to get the exact versions that were installed.

Further, if you happen to "accidentally" `pip install some-package` in your virtual environment, your app might now be using different packages locally without you noticing. With Pipfile the need for virtual environments is pretty much gone, assuming that at runtime it will automatically load the version of a package specified in the lockfile, which is not clear to me yet from the README.

examancer 1 hour ago 0 replies      
This is almost identical to how bundler in Ruby works, right down to the language native dependency DSL, named groups, file name conventions (Pipfile = Gemfile, Pipfile.lock = Gemfile.lock), and deterministic builds.

It's identical because bundler mostly got it right, and dependency management in Ruby, while still not great/perfect, is better than just about everywhere else.

Kudos to python for moving forward.

holografix 40 minutes ago 0 replies      
Can't we just leave it as it is? I love the fact that requirements.txt is stupid simple. No over-engineered JSON shenanigans. Wanna group things into prod, dev etc.? Create 2 files.
nubela 1 hour ago 3 replies      
I don't understand the need for this. The main benefits of the Pipfile seem minute, and can already be achieved with requirements.txt.

But I'm sure I'm missing something. Please feel free to convince me I'm wrong :)

misterbowfinger 1 hour ago 1 reply      
About time. The Ruby, Elixir, and Rust communities are far, far along in their package management tools. Working with pip feels like going back in time these days.
Animats 45 minutes ago 1 reply      
If you need old-version retrieval on a routine basis, library regression testing has a problem. This gives package developers an excuse to break backwards compatibility. Historically, Python has avoided that, except for the Python 3 debacle.
nicois 1 hour ago 0 replies      
this is what I have used for the past few years. I put my unversioned requirements in a subdir and run this when I want to bump versions.

#!/bin/bash -x
WHEELHOUSE="/usr/local/wheelhouse"
[ -d "$WHEELHOUSE" ] || ( sudo mkdir -p /usr/local/wheelhouse/ ; sudo chmod -R 0777 /usr/local/wheelhouse/ )
deactivate
set -e
cd .requirements
for reqfile in requirements*txt ; do
    TEMPDIR="$(mktemp -d)"
    virtualenv -ppython3 "$TEMPDIR"
    . "$TEMPDIR"/bin/activate
    pip install -U pip
    pip install -U wheel
    pip wheel --find-links="$WHEELHOUSE" --wheel-dir="$WHEELHOUSE" -r $reqfile
    pip install --find-links="$WHEELHOUSE" -r $reqfile
    pip freeze | grep -v "pkg-resources" | sort > "../$reqfile"
    rm -rf "$TEMPDIR"
done
wait

shadowmint 7 minutes ago 0 replies      
First of all, this is great, and I'm hugely in favour of something like this going forward. Great work!

...however, I strongly disagree on the benefit of making the `Pipfile` executable python. Just read this gist: https://gist.github.com/kennethreitz/4745d35e57108f5b766b8f6...

> - This file will be "compiled" down to json.

Then why does it exist?

We know it'll be abused; we should have learnt our lesson from scons and setup.py: using Python code itself as a declarative DSL wasn't a great idea then, and it still isn't. Just use a standard hierarchical file format (json, toml, xml, whatever).

Features for introspecting and editing `Pipfile.lock` should be rolled into pip and exported as a core Python module; an API for editing Pipfile.lock is a good idea, but executing a `Pipfile` is not.
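As a sketch, such an API could treat the lockfile as plain data (the JSON layout and function name here are hypothetical, not the actual Pipfile.lock schema):

```python
import json

# Hypothetical Pipfile.lock contents -- plain data, so no execution needed.
lock_text = '{"default": {"requests": {"version": "==2.12.0"}}}'

def bump(lock_json, package, version):
    """Return new lockfile text with `package` pinned to `version`."""
    lock = json.loads(lock_json)
    lock["default"][package]["version"] = version
    return json.dumps(lock, indent=2, sort_keys=True)

updated = bump(lock_text, "requests", "==2.12.1")
assert json.loads(updated)["default"]["requests"]["version"] == "==2.12.1"
```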

dimino 1 hour ago 0 replies      
Very happy about this, the Pypa crew are saving Python from itself.
breerly 1 hour ago 0 replies      
Wth happened to pip-tools? The Python ecosystem is a mess...
India's Misguided War on Cash bloomberg.com
33 points by walterbell  53 minutes ago   21 comments top 10
jychang 31 minutes ago 2 replies      
> Anyone with outstanding notes must either deposit them in a bank -- potentially incurring a tax -- or exchange them for replacements in strictly limited sums.

Incurring a tax is the entire point of replacing the old currency. A large portion of the black untaxed money is untraceable in the form of cash, so getting them into banks and taxed is the first step.

This article is overly negative about drawbacks that are part of the expected outcome -- drawbacks that even proponents of the change admit. Of course there are obvious downsides to invalidating most of a nation's currency at once; the point is how this process benefits the country.

A lot of poorer Indians barter or use smaller cash denominations anyway, and the government isn't trying to crack down on that; comparing India to Nigeria in that sense doesn't matter. And temporarily shrinking the economy of the entire country is very intentional: if a big part of your country's business dealings is done with illicit cash, cracking down on it will of course reduce the market.

The idea is to sacrifice some money in order to reduce corruption and increase the trust in government and business systems. If that is successful, then this move is very beneficial for India in the long run.

chdir 13 minutes ago 1 reply      
"Its like the PM is cleaning a pond to kill the crocodiles, forgetting that crocodiles can survive even on land. Big crocodiles have survived. Small fish have died." [1]

The immediate reaction of the common man to the demonetization of higher currency notes (followed by replacement with newer ones) was euphoric. A popular thought process was that people can tolerate inconvenience for a few days because it'll invalidate the "black money" [2] and thus rein in inflation, property prices and corruption to some extent. However, from the moment this was announced, people have been brazenly finding loopholes to convert their currency notes through unofficial means. Examples: back-dated receipts for expensive items (gold, cars, property), fake accounts, hiring poor folks for conversion via official channels, bulk purchase of coupons / cards for essential items that are permitted to accept older notes, etc.

[1] http://indianexpress.com/article/india/india-news-india/demo...

[2] Untaxed money, undeclared income, ill-gotten money via bribery, fraud etc..

filereaper 29 minutes ago 1 reply      
Not quite sure what the author is trying to get at; any change affecting the entire population of India will of course be 'immensely disruptive'.

There's no war on cash; there's a refresh of the notes. It's an attack on black money: it's time to pay the dues if you've been hoarding illegal revenues. India isn't getting rid of cash, and the comparisons to Nigeria are pointless.

avighnay 3 minutes ago 0 replies      
Here is a financial expert's view on it: https://www.youtube.com/watch?v=W-Bjw_SFNdM

There are multiple opinions and counter-opinions, but no one seems to know exactly how this is going to turn out. It's wait and watch for now...

anilgulecha 15 minutes ago 0 replies      
The article is using unrelated statistics (countries by % of GDP in cash) and connecting this to why the experiment may fail. This is humbug.

The biggest threat looks to be the methods of laundering that are cropping up -- I'm starting to hear of instances where offline-chits and money transfer agents are accepting old currency without any markdown. So it looks like laundering hasn't received as large a blow as anticipated, and hence the experiment may very well fail by the Dec 30 deadline.

One positive thing to note is that a majority of the citizens are tired of the parallel economy and are openly supporting the government's moves. Hence it should give the Modi government the cover and incentive to add-in more stringent checks and regulations to take on the laundering. This is probably the only hope for any success in this experiment.

BTW, whether this experiment succeeds or fails, it's a net positive in the long run, as it has shown: 1) that a government can take steps towards stopping the parallel economy, and 2) that it can do so with the people's support.

sschueller 26 minutes ago 0 replies      
This war on cash is disturbing especially from governments and financial institutions which have zero transparency.

UBS is calling for the removal of the 100 bill in Australia[1] and the same is happening in other countries.

Thankfully it doesn't matter if they manage to kill all cash: cryptocurrency is, in a way, digital cash.

These institutions don't realize that their power grab is pushing people more and more into currency which they have zero control over.

[1] http://www.theaustralian.com.au/business/financial-services/...

linux_devil 14 minutes ago 0 replies      
I am based in India and I find this article incomplete. Corruption is a very big problem in India, and a lot of bribes are paid in cash. For example: if you want to buy a house that is sold for 20 million INR, the value of the same property is 5 million on paper (to evade house tax etc.) and the rest of the money is paid in cash or through some black-money channel. Real estate stocks started sinking five days straight and are expected to sink further after this move. Yes, it causes inconvenience to people, but as a taxpayer and contributor to the economy I don't mind a small inconvenience for the greater good.
woof 10 minutes ago 0 replies      
The editors at Bloomberg should do their job (edit)!

Read this FT article instead:


gnipgnip 8 minutes ago 0 replies      
Indians have far too much trust in the state (well, Modi, to be precise) -- that's precisely why old notes are quickly falling out of circulation. There appear to be plenty more possibly draconian measures in the pipeline.



P.S.: Nigeria's per-capita income is higher than India's in nominal terms (so is Africa's as a whole).

tn13 25 minutes ago 1 reply      
This article stinks of bias.
Mastering Time-to-Market with Haskell fpcomplete.com
32 points by Tehnix  2 hours ago   14 comments top 8
millstone 6 minutes ago 0 replies      
IME Haskell development has a sort of bell-curve to it. Initially, you're spending a lot of time prototyping, fumbling around trying to find the right abstractions. Here Haskell mostly gets in the way: you have to declare up-front which functions do I/O, etc.

But once the core abstractions are settled, you start to reap its power. The type system catches tons of potential errors. Combinators allow for enormous expressiveness. Here you're rolling: Haskell is in its zone!

But then you hit a wall. Laziness makes for brutal debugging. Singly linked lists actually suck. Performance optimization is a black art. You find yourself longing for a language with simple semantics and mechanical sympathy. Now Haskell is bumping up against the real world.

Haskell has its sweet spot somewhere between "bang this out by 5pm" and "ship this to a million users". (No surprise it's popular in academia.)

kinkdr 47 minutes ago 1 reply      
Although I am very fresh in the Haskell world myself, I tend to agree with the author that, when I know what I am doing, my Haskell code is usually written in less time and has fewer bugs.

Having said that, time-to-market is only partially influenced by my code; the biggest part is the code that I don't have to write, i.e. third-party libraries.

In my Haskell adventures I am having trouble finding third-party libraries for even the most popular things, e.g. Cassandra. As far as I can tell there are two libraries, the `cassandra-cql` and the `cql-io`, the first hasn't been updated for a year now, and the second has only 3 stars, which makes me uneasy.

So, although I can see where the author is coming from, I don't think you can beat Java, Ruby, JS or Python in that sense. Unless of course your code/project doesn't have a lot of dependencies.

tmptmp 21 minutes ago 0 replies      
Warning: be warned before you commit to Haskell. Not all is rosy about Haskell. You may find yourself in a quagmire if you don't know for sure what you are going to get from Haskell, especially from the libraries. Although this is true for other languages too, library support for Haskell is still far from satisfactory compared to what you find for Python/Java. The Haskell community seems to be divided over it.

Not so long ago there was some discussion about the "batteries" included with Haskell. [1] It compared the situation of Haskell with that of Python/Java etc.; worth reading if you are about to go the Haskell route.

It seems the priorities (academic, commercial, library support and so on) of the members of the Haskell community are at a crossroads, and they cannot seem to resolve them very well, IMHO.

My take: Haskell is good for learning some really deep concepts, but maybe not so good when it comes to commercial projects, unless you are a Haskell veteran and also have an army of Haskell veterans with you.

[1] http://osdir.com/ml/haskell-cafe@haskell.org/2016-10/msg0001...

stanislavb 7 minutes ago 0 replies      
And I believe that, based on the listed properties/features, Elixir may qualify pretty well too. What is more, Elixir may be a better choice with regard to time-to-market plus developer happiness.
kriro 27 minutes ago 0 replies      
The article assumes an existing team which is a bit problematic when talking about time to market. If you start the analysis earlier (two people discussing some ideas in a coffee shop) then I'd argue that a language like Haskell can be problematic if your metric is time to market. You might very well make that up later by having a more robust code base or reaping any of the other asserted benefits but the existing gallery of premade and tested building blocks in other languages seems to be richer. It's probably also going to be harder to add people to your team (on average).

I would have liked to see a comparison to other functional languages (say Elixir or OCaml) and not just Java and C#. I'd also argue that picking Java instead of a more agile environment (there are some cool lightweight Java frameworks, but most people will associate it with the rather heavy enterprise stack) when comparing time to market is a bit odd. Granted, I'm mostly thinking about webapps (but the article mentions Yesod).

Still a nice article (since my post sounds overly negative upon rereading).

almata 8 minutes ago 0 replies      
If you were a developer with 10 yoe looking for something new to get into, what would you choose at this moment and thinking about the near future: Haskell, Scala or F#?
mirekrusin 42 minutes ago 0 replies      
Why are comparisons to C#/Java mentioned so many times, but there's not a single mention of F#?

Maybe with F# the author would see a decrease in development time compared to Haskell?

pron 1 hour ago 2 replies      
> In summary we've seen that: Haskell decreases development time...

Have we actually seen that, or have you just asserted that? Is this really true, and if it is, by how much? Haskell has been around for a couple of decades now, and has had at least two hype cycles (I remember that when I was in university in the late '90s, Haskell was the next big thing). It does not seem to expand significantly even within organizations that have tried it (and that's a very negative signal), with at least one notable case where the language has been abandoned by a company that was among the flagship adopters.

In general, we know that often linguistic abstractions that seem like a good idea in theory -- or even seem to work nicely in small programs -- don't end up having a significant effect on the bottom line when larger software is concerned. People say that scientific evidence of actual contribution is hard to collect, but we don't even have well-researched anecdotes. Not only do we not have strong evidence in favor of this hypothesis, but there aren't even promising hints. All we do have is people who really like Haskell based on its aesthetics and really wish that the nice theoretical arguments translated to significant bottom-line gains.

This blog post by Dan Ghica, a PL researcher, really addresses this point: there is nothing to suggest that aesthetically nice theory translates to actual software development gains, and wishful thinking (or personal affinity) simply cannot replace gathering of data: http://danghica.blogspot.com/2016/09/what-else-are-we-gettin...

4-bit calculator made from cardboard and marbles lapinozz.github.io
483 points by MaxLeiter  12 hours ago   143 comments top 16
ythn 11 hours ago 25 replies      
Thought experiment: if sentient AI is possible with nothing more than software, does that mean if you "load" the sentient AI program into a "computer" made of cardboard and marbles, that the cardboard and marbles will be self-aware?
toomanybeersies 11 hours ago 0 replies      
From the woodworking site posted a couple of days ago, Matthias Wandel made one out of wood:


ChuckMcM 11 hours ago 1 reply      
People build these, and they are fun, and I'm surprised that they don't still make and sell the Digicomp[1]. (yes I know some people have been doing limited runs but it seems like there should be a persistent market for it)

[1] https://en.wikipedia.org/wiki/Digi-Comp_I

brownbat 8 hours ago 0 replies      
These remind me of Dr. Nim, the unbeatable single-player board game from the 60s, which also used marbles to do binary math:


donquichotte 11 hours ago 0 replies      
As one of the comments on the bottom of the page points out, Canadian woodworker Mathias Wandel built a similar machine some time back: https://www.youtube.com/watch?v=GcDshWmhF4A
ReedJessen 11 hours ago 0 replies      
A literal example of race conditions ;)
sweetjesus 10 hours ago 0 replies      
this calculator nicely illustrates an insight that I feel gets lost in the way most people learn and re-teach some simple CS concepts:

"two's complement" is not a different system for arithmetic that includes a "sign bit", it's just a different encoding or labelling of states which happens to have a bit that reflects the sign. So, inputs to this calculator can be said to go from 0-15, but more interestingly it can also add numbers in the range -8 to +7 (and therefore, it can also subtract, though it can't negate so you'd have to manually do that to your input by performing a different encoding table lookup).

(edit: now I'm realizing you could negate by performing a two's complement multiplication by -1, performed using this calculator via a sequence of 3 (shift+adds) of your input number to itself... that's correct at least up to some fencepost)

And then by extension, you could test "what about treating the range as -10 to +5", would that encoding succeed or break down? for starters, you would no longer have a sign bit...
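The relabelling can be checked mechanically; a small sketch (the function names are illustrative, not from the article):

```python
def to_signed(n):
    """Relabel an unsigned 4-bit state 0..15 as two's complement -8..7."""
    return n - 16 if n >= 8 else n

def add4(a, b):
    """What the marble adder does: addition modulo 16."""
    return (a + b) % 16

# One and the same mechanical operation serves both encodings:
assert add4(13, 6) == 3          # unsigned: 13 + 6 = 19 -> 3 (mod 16)
assert to_signed(13) + 6 == 3    # signed:   -3 + 6 = 3

# Negation via multiply-by--1: x * 15 = x + 2x + 4x + 8x, i.e. three
# shift+adds of the input to itself, all modulo 16:
x = 5
neg = (x + 2 * x + 4 * x + 8 * x) % 16
assert to_signed(neg) == -5
```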

bcgraham 11 hours ago 3 replies      
This is a cool project. But when I was in school, if a classmate of mine came in with this, it would have really rustled my jimmies. Some kids have much better resources than other kids.
moolcool 4 hours ago 0 replies      
I won't be impressed till they run DOOM or the Linux kernel on it
e19293001 4 hours ago 0 replies      
Here's a guy who made a 4-bit adder out of water[0]. Creating a computer out of water seems to be really doable. I'd be interested to watch more stuff like this.

[0] - http://www.blikstein.com/paulo/projects/project_water.html

mdonahoe 3 hours ago 0 replies      
I suggest adding a binary-to-decimal decoder at the bottom to make demos more exciting for people who can't read binary.

I did that for a 4-bit K'nex adder/subtractor.

bbcbasic 10 hours ago 3 replies      
Is the AND gate really an AND gate? It seems to be a counter. More like a .skip(1) gate.
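A toy model of that reading (my speculation about the mechanism, not taken from the article): a hopper that releases on every second marble is really counting, and only coincides with AND when each input contributes at most one marble:

```python
def second_marble_gate(marbles):
    """Fires once for every two marbles received, regardless of origin."""
    return len(marbles) // 2

# With one marble per input, this matches a 2-input AND...
assert second_marble_gate([1, 1]) == 1   # both inputs -> output marble
assert second_marble_gate([1]) == 0      # one input  -> no output
# ...but it is really a counter, as the comment suspects:
assert second_marble_gate([1, 1, 1, 1]) == 2
```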
qwertyuiop924 11 hours ago 0 replies      
This is probably the coolest thing I have seen today.
tmaly 9 hours ago 0 replies      
Wow, that is pretty cool. It made me think of the book The Difference Engine.
gohrt 4 hours ago 0 replies      
frostirosti 8 hours ago 1 reply      
Now implement carry look ahead
Goldman Sachs Drops Out of R3 Blockchain Group wsj.com
171 points by JumpCrisscross  9 hours ago   55 comments top 7
buckie 7 hours ago 2 replies      
I can't say I'm all that surprised. R3 serves as both an R&D lab and a place to connect vendors and potential clients. Moreover, the focus is on private/permissioned blockchain tech. The tech in this space is still quite immature overall and pretty much everyone has had a hard time building actual blockchain infrastructure vs things that are sorta like a blockchain. If you can't show clients value for their membership in R3 (because the tech's not there) then what's the value of staying a member (especially if you've seen all the vendors in their portfolio already) vs rejoining when things change? This isn't R3's fault of course, as I think the tech in this space is just getting started and they're in a great place to help it along; members will come and go.

I spent a year investigating blockchain applications for JPM so I've seen most of the permissioned blockchain solutions that are out there. Unfortunately, none of the solutions I saw were real blockchains. The more technical groups in banks (which GS has) surely realize this as well. The solutions I've seen consistently lack at least one of the following necessary features:

- Fast, Deterministic BFT Consensus (mining doesn't work as intended in private contexts[1])

- Smart Contracts (you need a deterministic language)

- Signed Transactions (TLS-based authentication trades security & auditability for speed)

- Cryptographic data structure for transaction storage

There are other features you need, but these are the big distinguishers I noticed. While technically you don't even need smart contracts or fast BFT consensus, I believe the tech isn't useful enough to justify the migration costs without them.

Disclaimer: I'm a founder in the private blockchain space[2] and founded specifically to make an infrastructure that addresses these issues.

[1]: http://kadena.io/blog/MiningInPrivate.html

[2]: http://kadena.io

brilliantcode 5 hours ago 1 reply      
This probably means that they couldn't find a problem worth solving with blockchain. Not a good sign for blockchain companies in 2017; expect to see more big names ditching blockchains and the hype train finally coming to a stop.

I predict we will see a huge hammer come down from SEC & IRS surrounding ICOs in 2017 as well.

gregoryrueda 8 hours ago 2 replies      
Is this so they can build their own Blockchain?
known 3 hours ago 0 replies      
Reasons for internal resistance to blockchain tech at banks

Regulatory 63%

Compliance 56%

Security 31%

Cost 19%


ArchReaper 9 hours ago 4 replies      
Anyone have a non-paywall link?
VladKovac 8 hours ago 4 replies      
Angry rhetoric against middlemen, how predictably simplistic of you.
How we made our React-Rails app 5x faster progressly.com
24 points by amk_  2 hours ago   12 comments top 4
misterbowfinger 1 hour ago 2 replies      
1. Make sure you're using the latest stable version of React core

2. Get a CDN for static assets

3. Use Webpack, again, for making development easier

Saved you a click.

prymitive 16 minutes ago 0 replies      
Is it just me, or is every story titled "10 tips on making X faster" or "how we migrated from X to Y and reduced the number of servers from 10k to just 2" or "how we sped up Z by 400000%" really just a story of "how we made our project better by stopping, checking what we really need to do, researching how to do it and then investing time to actually do it right"? There is hardly ever any ground-breaking wisdom, revolutionary new algorithm or amazing new tool; it's all about knowing what you're doing, which usually takes a few tries.
ciconia 44 minutes ago 2 replies      
> Last week we deployed an update to our React-Rails app that improved load time by 500%

By "improve" they surely mean reduce -- but how can a measurement be reduced by 500%? If load time decreased to one fifth, it would have been clearer to write "load time was reduced by 80%".
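The arithmetic behind the complaint, with made-up numbers:

```python
old, new = 5.0, 1.0                 # suppose load time went from 5s to 1s
speedup = old / new                 # "5x faster" -- a ratio, fine
reduction = (old - new) / old       # the correct percentage framing
assert speedup == 5.0
assert reduction == 0.8             # "reduced by 80%", not "by 500%"
```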

diziet 1 hour ago 2 replies      
How do they define 5x faster? From HTTP request to the browser rendering the page usable?
Syscall Auditing at Scale slack.engineering
138 points by knoxa2511  7 hours ago   18 comments top 8
amluto 3 hours ago 4 replies      
As the kind-of-sort-of maintainer of Linux's syscall infrastructure on x86, I have a public service announcement: the syscall auditing infrastructure is awful.

It is inherently buggy in numerous ways. It hardcodes the number of arguments a syscall has incorrectly. It screws up compat handling. It doesn't robustly match entries to returns. It has an utterly broken approach to handling x32 syscalls. It has terrifying code that does bizarre things involving path names (!). It doesn't handle containerization sensibly at all. I wouldn't be at all surprised if it contains major root holes. And last, but certainly not least, it's eminently clear that no one stress tests it.

If you really want to use it for production, invest the effort to fix it, please. (And cc me.) Otherwise do yourself a favor and stay away from it. Use the syscall tracing infrastructure instead.

jtakkala 4 hours ago 0 replies      
Great idea. I always thought that it's essential to log events in realtime to a remote system that is secure and harder to compromise to modify the logs post-intrusion. Way back in the day it was suggested to do this to an entirely offline system by cutting the rx pins on a parallel cable, thereby only allowing the one-way transmission of logs to the log server. I don't know if anyone ever did that in practice though.

Anyways this invites the question, are you allowing your production servers to make outbound internet connections? Generally, I would proxy outbound connections and/or use internal mirrors and repos for the installation of software.

nwmcsween 9 hours ago 1 reply      
The issue with enabling syscall auditing is the overhead it introduces -- IIRC somewhere around two orders of magnitude, as in 200000/s -> 3000/s. I would just use seccomp-bpf filters on a per-program basis, as the overhead there, according to benchmarks, is much less.
ejholmes 5 hours ago 0 replies      
Great article. I'd also recommend people take a look at ThreatStack for aggregating syscall events: https://www.threatstack.com/.
aliakhtar 4 hours ago 1 reply      
Would this be useful in a containerized architecture where everything is run as a container?
peterwwillis 1 hour ago 0 replies      
If you want similar functionality to the first questions without audit, try netfilter. It's still shitty logs, unfortunately, but so is most monitoring.
mistertrotsky 10 hours ago 1 reply      
This is super great.
mburns 5 hours ago 1 reply      
Adam Ondra Completes Record Ascent of the Dawn Wall outsideonline.com
63 points by aruss  4 hours ago   14 comments top 9
peace011 34 minutes ago 0 replies      
Keep in mind that although he did it in a single push of 8 days, he has actually been up and down the wall for a lot longer, freeing and aiding every pitch many times. I was on the Nose almost a month ago and I could see his portaledge hanging out there. His accomplishment is really amazing and the Dawn Wall is so damn blank! No idea how these people get so good!
ethbro 19 minutes ago 0 replies      
Plug for Valley Uprising (on Netflix) if you're interested in climbing. The progression of difficulty and (mostly friendly and supportive) competition between climbers is inspiring.

Amazing feat!

PS: Would be happy to take anyone else's recommendations for videos or books too.

toppy 4 minutes ago 0 replies      
How does it compare to what Alex Honnold achieved on The Nose? Is the difficulty or size different?
frrp 15 minutes ago 0 replies      
Following along in photos is much better than just reading about Ondra: www.instagram.com/adam.ondra/
jdale27 1 hour ago 1 reply      
That is fucking amazing. I can't even comprehend what it must be like to be living on that wall for eight days (let alone 19 or 28). The mental and emotional state it must put you in...
Fricken 1 hour ago 0 replies      
I wonder if in my lifetime we'll ever see a big, sustained free climb go up that surpasses the Dawn Wall. It took Tommy Caldwell 7 years just to unlock it. Testpieces like that don't come around very often.
wolf550e 50 minutes ago 2 replies      
Are there more photos of what this climb looks like? Maybe a video?
raftaa 1 hour ago 0 replies      
Yeah, but it was also nice to see him struggling with his nervousness. Just like the rest of us. Maybe he's just human too.
buzzdenver 1 hour ago 0 replies      
Adam is a total bad-ass, and it's amazing how he's world class in so many disciplines and styles of climbing.
Microsoft planning to enable x86 on ARM64 emulation in Windows 10 by Fall 2017 zdnet.com
168 points by benologist  12 hours ago   74 comments top 15
matt_wulfeck 4 hours ago 1 reply      
> Continuum -- the capability that will allow Windows 10 Mobile devices to connect to external displays and keyboards -- is going to be a key for the company

This actually sounds like a very good move by Microsoft. Just issue people a phone and they will do all their work on that. There's really no need for giant workstations anymore, and I think this will be more successful than a Chromebook-type thing.

novaleaf 5 minutes ago 1 reply      
Mojang? Could someone please inform me how it's related? (The article mentions that this tech comes from Mojang, the makers of Minecraft (?!?!?!))
haberman 8 hours ago 2 replies      
> He noted that this technology seemingly has a new name, "CHPE." [...] My guess is the HP here is HP, as HP has been working increasingly closely with Microsoft on its Windows 10 PCs and the HP Elite x3 Windows Phone. (Maybe the "E" is for emulation?)

The binary file format on Windows is called PE (portable executable). I wonder if this might possibly be a fat binary format.

glandium 9 hours ago 3 replies      
Anyone else reminded of FX!32 on Windows NT for Alpha? https://en.wikipedia.org/wiki/FX!32
TazeTSchnitzel 10 hours ago 2 replies      
Before someone says WOW64 isn't an emulator, the article isn't actually wrong. 64-bit Windows originally meant Itanium, and WOW64 was an emulator on that platform. Of course, it isn't (very much of one?) on AMD64.
mmastrac 7 hours ago 1 reply      
How awesome would it be if we could have a PnP processor? If you are docked, you run native x86 code. If you are mobile, you emulate it. The docking/undocking process could even be similar to VMotion.
nixos 1 hour ago 0 replies      
My question is how would this affect the market?

Apple will stay Apple. I don't think they'll go anywhere.

The question is Google. If this happened in 2008, I don't think Android would have taken off anywhere close to the way it did.

But now? On one hand, Android has millions of apps already on the market. On the other hand, Microsoft now has potentially millions of old, existing applications.

I don't think it will make a dent in the phone market. It's too commonly used as a hand-held rather than a station, and windows apps are useless there.

On the other hand, it can tank the Android tablet market

webaholic 11 hours ago 2 replies      
Please note that you can kind of already do this using qemu on both windows and linux.
wfunction 10 hours ago 4 replies      
How would this actually work? Wouldn't it be painfully slow? Even ARM emulation on x86 seems to be pushing it, and I feel like the reverse should be 10x worse at best...
markingram 2 hours ago 1 reply      
Would this impact HoloLens? I am so in love with it, and how can you not be, just look at this... https://www.youtube.com/watch?v=6vjlXJCcUUc
dynjo 5 hours ago 0 replies      
This may play very well for Apple also if they intend to move to ARM CPUs in MacBooks. Bootcamp would have been a major blocker.
whitehat2k9 8 hours ago 0 replies      
What are the performance implications of this? Isn't cross-instruction set emulation typically associated with substantial performance hits?
cordite 9 hours ago 3 replies      
Will we start seeing "Universal" apps on windows too, as in the fat binaries which have multiple architectures compiled into one?
raverbashing 10 hours ago 1 reply      
I wonder if the objective is only mobile devices or also the server market
mtgx 10 hours ago 0 replies      
Long overdue, but still welcome. Intel direly needs the competition. With AMD Zen and ARM notebooks coming, hopefully the market will look much more competitive in 2018.

Also, maybe Microsoft will have the guts to do what Google never did: standardize ARM processors, so that all ARM devices can be updated at once. Although I assume Microsoft will also start by supporting "Qualcomm-only" at first, just like it did for phones.

Giant 'Great Valley' Found on Mercury thescienceexplorer.com
42 points by lucodibidil  5 hours ago   6 comments top 4
nitrogen 4 hours ago 2 replies      
> "This is a huge valley. There is no evidence of any geological formation on Earth that matches this scale," said Laurent Montesi.... The valley is about 250 miles wide and 600 miles long, with steep sides that dip as much as 2 miles below the surrounding terrain. To put this in perspective: if Mercury's "great valley" existed on Earth, it would be almost twice as deep as the Grand Canyon and reach from Washington, D.C. to New York City, and as far west as Detroit.

Although they were formed by a different process, this does sound kind of like Earth's oceans. Do they not qualify as a geological formation?

deepnotderp 2 hours ago 0 replies      
Take that, Silicon Valley ;)

In all seriousness, this is very interesting and sounds kinda similar to glaciers.

egfx 3 hours ago 0 replies      
What's the temperature down there?
ktRolster 3 hours ago 0 replies      
ASP.NET Kestrel SuperCharged MemoryPoolIterator (Pull Request) github.com
26 points by philliphaydon  4 hours ago   3 comments top 2
Arnavion 4 minutes ago 0 replies      
If anyone wonders like I did what markdown magic could've made the collapsible "Details" sections in the second comment, it's actually standard HTML5 - https://developer.mozilla.org/en-US/docs/Web/HTML/Element/de...
hitr 2 hours ago 1 reply      
It seems like the Kestrel part of .NET Core got some amazing performance improvement contributions from the open-source community. I see the performance of Kestrel is much better than any version of IIS + ISAPI or IIS7 + ASP.NET modules/handlers ever produced. Maybe this is partly to do with how simple the middleware is (just a function/method). But the request parsing logic got really good too, and I see that Kestrel could hit 5 million RPS as discussed in this talk[1] (compared to 50K of old ASP.NET). Some crazy optimizations and benchmarks are discussed in that video, like static byte arrays, memory pools, custom awaiters, bit manipulations for string comparisons, etc.

Kestrel will be one of the best when it comes to benchmarks.

[1] https://vimeo.com/172009499

[edit] added video url
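Not from Kestrel's actual source, but the "bit manipulations for string comparisons" trick mentioned above can be sketched roughly: pack a short ASCII token into an integer so one integer comparison replaces a byte-by-byte string compare. (Kestrel does this sort of thing in C# over request buffers; the names here are made up for illustration.)

```python
import struct

def token_u32(b: bytes) -> int:
    # Reinterpret 4 bytes as a little-endian 32-bit integer.
    return struct.unpack("<I", b)[0]

GET_TOKEN = token_u32(b"GET ")

def starts_with_get(request: bytes) -> bool:
    # One integer compare instead of looping over characters.
    return len(request) >= 4 and token_u32(request[:4]) == GET_TOKEN

print(starts_with_get(b"GET /index.html HTTP/1.1"))  # True
print(starts_with_get(b"POST /form HTTP/1.1"))       # False
```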

Amazon S3 and Glacier Price Reductions amazon.com
173 points by jeffbarr  6 hours ago   98 comments top 18
DanBlake 4 hours ago 6 replies      
Would really like to see some massive reductions in the operation costs and most importantly, bandwidth costs.

The bandwidth costs are so far out of line with what the network transfer actually costs that it feels like price fixing among the major cloud players: nobody is drastically reducing those prices, only storage prices.

Charging 5 cents per gigabyte (at their maximum published discount level) is equivalent to paying $16,000 per month for a 1 gigabit line. This does not count any operation costs either, which could add thousands in cost as well, depending on how you are using S3.
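The $16,000 figure checks out as a rough sketch, assuming decimal gigabytes, a 30-day month, and a fully utilized line:

```python
# What a fully utilized 1 Gbit/s line transfers in a month,
# priced at the $0.05/GB egress tier quoted above.
seconds_per_month = 30 * 24 * 3600                    # 2,592,000 s
gb_per_month = 1e9 / 8 * seconds_per_month / 1e9      # 1 Gbit/s -> GB/month
cost = gb_per_month * 0.05                            # $/month

print(round(gb_per_month))  # 324000 GB
print(round(cost))          # ~$16,200/month
```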

There are several providers that offer an unmetered 1gbps line PLUS a dedicated server for ~600-750/mo. Providers like OVH offer the bandwidth for as little as 100/month. ( https://www.ovh.com/us/dedicated-servers/bandwidth-upgrade.x... ) I am just not sure how Amazon can justify a 160x markup over OVH or a 30x markup over dedicated server + transfer.

For the time being, the best bet is to use S3 for your storage and then have a heavily caching non amazon CDN on top of it (like cloudflare) to save the ridiculous bandwidth costs.

cpkpad 5 hours ago 6 replies      
Well, the costs are nicer, but mostly, Glacier goes from an unusable pricing model to a usable one. I was terrified to use Glacier. The previous model, if you made requests too rapidly, you might be hit with thousands of dollars of bills for relatively small data retrievals -- very easy to make a very expensive bug.

I had wanted Amazon to wrap it in something where they managed that complexity for a long time. Looks like they finally did.

Now the only thing Amazon needs to do is expand free tiers on all of their services, or at least very low cost ones. I prototype a lot of things from home for work -- kinda 20% time style projects where I couldn't really budget resources for it. The free tier is great for that. All services ought to have it -- especially RDS. I ought to be able to have a slice of a database (even kilobytes/tens of accesses/not-guaranteed security/shared server) paying nothing or pennies.

Alex3917 5 hours ago 6 replies      
While I'm not going to complain about a price reduction, I'd honestly be more excited if S3 implemented support for additional headers and redirect rules. Right now, anyone hosting a single page app (e.g. Angular/React) behind S3 and Cloudfront is going to get an F on securityheaders.io.

And even worse, there is no way to prerender an SPA site for search engines without standing up an nginx proxy on ec2, which completely eliminates almost all of the benefits from Cloudfront. This is because right now S3 can only redirect based on a key prefix or error code, not based on a user agent like Googlebot or whatever.

This means that even if you can technically drop a <meta name="fragment" content="!"> tag in your front end and then have S3 redirect on the key prefix '?_escaped_fragment_=', that will be a 301 redirect. This means that Google will ignore any <link rel="canonical" href="..."> tag on the prerendered page and will instead index https://api.yoursite.com or wherever your prerendered content is being hosted rather than your actual site.

Not only is it a bunch of extra work to stand up an nginx proxy as a workaround, but it's also a whole extra set of security concerns, scaling concerns, etc. Not a good situation.

edit: For more info on the prerendering issues, c.f.:



Perceptes 5 hours ago 6 replies      
Is anyone using either S3 or Glacier to store encrypted backups of their personal computer(s)? I've only used Time Machine to back up my machine for a long time, but I don't really trust it and would like to have another back up in the cloud. Any tools that automate back up and restore to/from S3/Glacier? What are your experiences?
codedeadlock 20 minutes ago 0 replies      
Has anyone tried to migrate to Backblaze? Their pricing seems really aggressive, but I am not sure if we can compare Amazon and Backblaze when it comes to reliability.


lucb1e 1 hour ago 1 reply      
If costs matter to you, e.g. for home backups, don't buy Glacier (and heck don't buy S3). A 3TB drive costs about 110eur, so if you'd have to buy a new one every year (you don't) that'd cost 110/3/1000/12=0.31 cents per gigabyte per month. Glacier? 7 times more expensive at 2.3ct.

Hardware is usually not a business' main cost but it does matter for home users, small businesses or startups that didn't get funded yet, some of whom might consider Tarsnap or some other online storage solution which uses Glacier at best and S3 at worst. Now you could suddenly be 7 times better off if you do the upkeep yourself (read: buy a Raspberry Pi), even if you throw away drives after one year.

ww520 4 hours ago 0 replies      
Is the outgoing bandwidth still the same price? Bandwidth cost is kind of high compared to other services.
woah 4 hours ago 4 replies      
What is the mechanism that makes it cheaper to take longer getting data out? Is it that they save money on a lower-throughput interface to the storage? Is it simply just market segmentation?
QUFB 5 hours ago 2 replies      
I currently use S3 Infrequent Access buckets for some personal projects. These Glacier price reductions, along with the much better retrieval model look really great.

However using Glacier as a simple store from the command-line seems horribly convoluted:


Does anyone know of any good tooling around Glacier for the command line?

jakozaur 1 hour ago 0 replies      
Great discount. I'm only surprised that Infrequent Access doesn't get any discount.

By the way, I wrote an article on how to reduce S3 costs: https://www.sumologic.com/aws/s3/s3-cost-optimization/

physcab 5 hours ago 4 replies      
This is a really dumb question, but since I've never used Glacier, what does the workflow for a Glacier application look like? I'm used to the world of immediate access needs and fast API responses, so I can't imagine sending off a request to an API with a response of "Your data will be ready in 1-5 hours, come back later".
scrollaway 5 hours ago 2 replies      
Anyone else finding their S3 bill consisting of mostly PUT/COPY/POST/LIST queries? Our service has a ton of data going in, very little going out and we're sitting with 95% of the bill being P/C/P/L queries and only the remaining 5% being storage.

Either way, good news on the storage price reductions :)

deafcalculus 2 hours ago 0 replies      
Any chance Google will match this price for their coldline storage? I was planning to archive a few TBs in Google coldline, but Glacier is now cheaper and has a sane retrieval pricing model.
msravi 4 hours ago 0 replies      
> For example, retrieving 500 archives that are 1 GB each would cost 500GB x $0.01 + 500 x $0.05/1,000 = $5.25

Shouldn't that be $5.025? Or did I misunderstand?
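For what it's worth, a quick back-of-the-envelope check of the quoted example agrees with the $5.025 figure:

```python
# Reproducing the arithmetic from the quoted example: retrieving 500
# archives of 1 GB each under the new Standard retrieval pricing.
data_fee    = 500 * 0.01          # $0.01 per GB retrieved
request_fee = 500 * 0.05 / 1000   # $0.05 per 1,000 requests
print(data_fee + request_fee)     # 5.025
```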

questionr 2 hours ago 1 reply      
how does this compare to Google's Coldline storage?
jaytaylor 5 hours ago 5 replies      
EDIT: My mistake, this is the new S3 pricing! NOT Glacier pricing! Thank you res0nat0r.

Am I understanding this right? $0.023/GB/month for Glacier, so * 12 months/year = $0.276/GB/year, which means:

 10GB  = $2.76/year
 100GB = $27.60/year
 1TB   = $276.00/year
 ...
And this is only the storage cost. This doesn't take into account the cost should you actually decide to retrieve the data.

So considering a 1TB hard drive [0] costs $50.00, how is this cost effective? I can buy 5x1TB hard drives for the price of 1TB on Glacier.

I understand there is overhead to managing it yourself. So, is this just not targeted to technically proficient folks?

[0] https://www.amazon.com/Blue-Cache-Desktop-Drive-WD10EZEX/dp/...

thijsvandien 4 hours ago 1 reply      
With S3 Standard essentially getting S3 Standard - Infrequent Access storage pricing, where does that leave the latter?
user5994461 5 hours ago 4 replies      
That's both a good and a terrible change.

- The price reduction on S3 is good! Kudos AWS.

- The price change on glacier is a fucking disaster. They replaced the _single_ expensive glacier fee with a choice among 3 user-selectable fee models (Standard, Expedited, Bulk). It's an absolute nightmare added on top of the current nightmare (e.g. try to understand the disk specifications & pricing. It takes months of learning).

I cannot follow the changes, too complicated. I cannot train my devs to understand glacier either, too much of a mess.

AWS if you read this: Please make your offers and your pricing simpler, NEVER more complicated.

(Even a single pricing option would be significantly better than that, even if it's more expensive.)

Telescope That Ate Astronomy Is on Track to Surpass Hubble nytimes.com
102 points by dnetesn  10 hours ago   57 comments top 9
cmrx64 8 hours ago 2 replies      
One of my friends works at the STSCI on various visualization and simulation tools (all web-based, written in a modern stack!) for planning missions for this thing. It's a really really neat project.

The telescope itself is "scriptable" using a (truly ancient) version of JS via a really old implementation that I've seen referenced but can't find right now. There's a lot of open information about the JWST, but it's not widely reported on. Definitely worth checking out if you're interested in space, technology, and the systems we actually deploy into the void!

Some papers: http://www.stsci.edu/~idash/pub/dashevsky0607rcsgso.pdf


elihu 8 hours ago 6 replies      
I can understand the sense of "we just spent 8.7 billion dollars on this thing; we'd better not screw it up", but I wonder what the replacement cost would actually be if they had to launch another? Assuming they spent most of the money on R&D, it might actually be relatively cheap. Maybe we could even launch a couple smaller versions just to have backups and do the science tasks that don't require a full 21' mirror.

There's an interesting contrast between NASA's and Elon Musk's idea of what space exploration should be like; the former spends most of its efforts on one-off projects, whereas the latter is focused on making things cheap and repeatable and achieving reliability by iterating on a design rather than getting it perfect the first go-around.

Both approaches are needed, and certainly NASA paves the way for others to come along and do the same thing cheaper once it's been proven.

hackuser 8 hours ago 2 replies      
> A House subcommittee once voted to cancel it. Instead, the program was rebooted with a strict spending cap. ... The major change, said Jonathan P. Gardner, the deputy senior project scientist, was to simplify the testing of the telescope.

Ugh, that does not sound encouraging. Organizations love to cut testing and QA. After all, all they do is cost money, cause delays, and 'create' problems. Cutting them is an obvious way to bring a project under budget and on schedule.

Does anyone know what changes were made to testing?

We're going to be unhappy, and the advancement of astronomy delayed a long time, if something goes wrong. I don't see the next President and Congress wanting to raise revenue and spend more on science.

jrussino 8 hours ago 1 reply      
Last week Dr. Michael Ressler at JPL gave a talk about the Webb telescope, mostly focused on the mid-range infrared imaging sensor:


If you happen to be in the LA area, JPL hosts these talks once a month at their facility in Pasedena. They're free and open to the public, and they're always interesting. And if you can't attend they're also posted to the JPL website.

Animats 7 hours ago 0 replies      
"Ate astronomy" means it costs more than all the proposed big ground-based telescopes combined. The Overwhelming Large Telescope was considered too expensive at $1.5bn.

It's the F-35 of astronomy.

simosx 8 hours ago 2 replies      
The deployment of the telescope looks like quite a complicated process.
SubiculumCode 3 hours ago 1 reply      
Please use a safe rocket to get into space. That would be one stressful rocket ride for the project scientists on earth.
happytrails 7 hours ago 1 reply      
well, you can't put a price tag on knowledge and discovery :)
elihu 9 hours ago 2 replies      
This may be obvious to everyone else, but today I learned that https://en.wikipedia.org/wiki/James_E._Webb is not the same person as 2016 Democratic primary candidate https://en.wikipedia.org/wiki/Jim_Webb

I feel a little better now. I was wondering why we would name a space telescope after that guy.

Debian considers merging /usr dralnux.com
22 points by dengerzone  3 hours ago   7 comments top 3
jeena 45 minutes ago 2 replies      
So what are the arguments actually?

A couple of days ago I was reading some POSIX book from 1991, and there the layout of /bin /lib /shared /usr/name/bin /usr/name/lib /usr/name/shared and so on was much more logical than what we have now, which is just weird as far as I can see, because I don't understand it.

mastazi 1 hour ago 0 replies      
HN's hug of death. Cached version is here: http://webcache.googleusercontent.com/search?q=cache:s8ApOo1...
marcoperaza 17 minutes ago 0 replies      
The linked email is pretty sparse on details. More information: https://lwn.net/Articles/670071/
The people trying to save programming from itself killscreen.com
4 points by phodo  57 minutes ago   1 comment top
tehwalrus 0 minutes ago 0 replies      
This sounds like something that should have a simple description, but I'm not sure the article tells enough to work out what it is.

(Unless the simple description is Not Invented Here syndrome, which I imagine it's not).

I had a quick look at the website linked at the end but it's just a site with a list of projects like a text editor and some game debugging components.

How Eve unifies your entire programming stack hackernoon.com
118 points by tbatchelli  8 hours ago   49 comments top 11
klibertp 5 hours ago 0 replies      
> Eve is the culmination of years of research and development by the visionary team who previously founded Light Table.

...and then abandoned it after getting me excited about a possible Emacs replacement...

dicedog 6 hours ago 1 reply      
I'm excited for Eve and next-gen programming languages but as details emerge, it seems like it could easily have been a few libraries and an architecture pattern in most other functional programming languages. When I first learn about things like the continuation monad or CQRS, I have similar reinvent-the-world fantasies but it's often sufficient to expand my toolkit and change my style (in full disclojure ;-), my default language is clojure/script)
tharibo 15 minutes ago 0 replies      
Title : "your entire programming stack".

I feel sad when I find that web programmers think there are only web stacks and "programming" refers only to web programming.

tedajax 7 hours ago 2 replies      
Yeah, until I really see some practical applications written in Eve I don't think I'm ever going to really get it. It's nothing against Eve, it might be great, but nothing I've seen about it so far has really captured my imagination, and I'm wondering if I'm just missing something.
skybrian 7 hours ago 3 replies      
The part I'm most skeptical of is Eve's universal use of set-based semantics, whether it's needed or not. It seems like making sets and single values look different in the code would be more understandable than making everything look the same. Treating them as different types might be a good way to catch errors, too.

But SQL is very successful so maybe they'll do okay anyway.

quantumpotato_ 1 hour ago 1 reply      
I'm impressed with the abstractions! What are recommended "Learn Eve" tutorials?
sua_3000 4 hours ago 1 reply      
I feel like bridging the client/server divide is the next major opportunity for abstraction.
dajohnson89 29 minutes ago 0 replies      
Seems cool
erichocean 6 hours ago 2 replies      
Can Eve call into C without any overhead?

If it can't, then "your entire programming stack" is excluding the kernel and a lot of existing libraries/code.

dkarapetyan 7 hours ago 1 reply      
How does one package and deploy an Eve application to some cloud infrastructure for example?
miguelrochefort 6 hours ago 0 replies      
I've been looking for the holy-grail for years (TodoMVC being my benchmark), and Eve takes the cake.

I'm looking forward to compatibility with semantic Web technologies.

Inspiring Young Writers with Minecraft edutopia.org
35 points by clbjnstn  5 hours ago   15 comments top 2
tomc1985 3 hours ago 5 replies      
I wish more people felt uncomfortable with such a close marriage between commerce and lower education. Get em while they're kids and you have a customer for life...
chungy 3 hours ago 0 replies      
This is a rather surprising development to me, I didn't imagine that this would inspire writers, but I quite applaud what Minecraft does for creativity at all ages!
Japan issues tsunami warning after magnitude 7.3 earthquake bbc.com
278 points by v4n4d1s  10 hours ago   86 comments top 16
euske 6 hours ago 5 replies      
This is a good opportunity so I'm gonna post what I, a native-born Japanese, have been always thinking: This country is literally shaped by earthquakes. This is true not only in a geological sense but it also applies to the culture. Earthquakes affect how all the buildings here are made, the way of transportation, and virtually every aspect of our daily life. We always fear them and talk about them, but we're kinda resigned to accept the fate. This concept of resignation is seen in many ways in the Japanese culture. But it's also earthquakes that make us truly united. I realized this when the big quake hit the country five years ago. As much as we hate them, we are defined by earthquakes.
cossatot 9 hours ago 3 replies      
According to the USGS's page (linked to by civilian), the earthquake was a strike-slip earthquake, where two blocks of crust slide laterally relative to each other on a vertical fault, with no real uplift or subsidence of the seafloor. It's unlikely that there will be a major tsunami, as these are caused by rapid displacement of lots of water by the seafloor. However, given the right topography along the fault, it is possible.
rdlecler1 9 hours ago 4 replies      
I was in Tokyo at my hotel when it happened. I thought maybe I had a Japanese vibrating bed for an alarm clock before realizing what was happening. I was on the 7th floor of a hotel and there was a slight sway for about 45 seconds. Given the duration I assumed it was fairly sizeable but far away.
b_emery 10 hours ago 2 replies      
Apparently not a threat to Hawaii - Tsunami threat for Japan:



hccampos 10 hours ago 2 replies      
civilian 10 hours ago 0 replies      
USGS map: http://earthquake.usgs.gov/earthquakes/eventpage/us10007b88#...

//edit Any idea when the tsunami will hit? The original news happened at 20:59 UTC (1pm PST), but I'm not sure how fast tsunamis travel.

A sister comment referenced tsunami.gov, which is for US dwellers, but this NOAA website has more information for people living outside the US: http://ptwc.weather.gov/?region=1&id=pacific.TSUPAC.2016.11....

text: http://ptwc.weather.gov/text.php?id=pacific.TSUPAC.2016.11.2...

So, Katsuura was just hit, and the other cities will be hit soon.
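On how fast tsunamis travel: in the open ocean they behave as shallow-water waves, so their speed is roughly sqrt(g × depth). A quick sketch, assuming a typical deep-Pacific depth of about 4000 m:

```python
import math

def tsunami_speed_kmh(depth_m: float) -> float:
    """Shallow-water wave speed sqrt(g * depth), converted to km/h."""
    g = 9.81  # m/s^2
    return math.sqrt(g * depth_m) * 3.6

# In the deep ocean a tsunami moves at jet-airliner speed;
# it slows dramatically as it reaches shallow coastal water.
print(round(tsunami_speed_kmh(4000)))  # ~713 km/h offshore
print(round(tsunami_speed_kmh(50)))    # ~80 km/h near shore
```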

huangc10 10 hours ago 2 replies      
(Edit) Japan Meteorological Agency has updated the earthquake to 7.4 magnitude.
dandelany 9 hours ago 1 reply      
Worryingly, TEPCO is reporting that the cooling system for the 3rd reactor at Fukushima Daiichi has stopped (via the NHK live TV stream)... Supposedly there is enough water in the pool that it will not be dangerous for quite a while, but this needs to get fixed soon.
komali2 9 hours ago 0 replies      
The livestream is talking about how 2 methods of water cooling at the Fukushima power plant stopped, but there's no immediate danger because of some third system. Anybody have more details on this?

"Right now the water temperature is 27 degrees and the water temperature will not rise to dangerous levels... for a while."

SeoxyS 2 hours ago 0 replies      
I was woken up by the quake this morning. Pretty strong feeling; it shook for a good few minutes. As far as I can tell everyone's safe in Tokyo, though the trains are a bit delayed and the elevators in my building weren't running.

Stronger by far than any quake I've ever felt in 8 years in San Francisco.

euske 8 hours ago 1 reply      
Tsunami alerts/forecasts: http://typhoon.yahoo.co.jp/weather/jp/tsunami/

They should really provide an English version of this page. Come on, Yahoo Japan.

ekianjo 7 hours ago 0 replies      
Was there this morning; the shake was pretty long, several minutes. It's rare.
reddytowns 8 hours ago 1 reply      
I have a Japanese friend who said whenever a quake in NZ hits, the whole of Japan is on edge for the next month because Japan quakes often follow NZ quakes after a couple of weeks.

It seems to have happened again. I wonder why you don't hear anything in scientific circles about this.

oolongCat 9 hours ago 3 replies      
Every time something important happens I notice reddit is a better source for getting an aggregate of information than twitter or (sometimes) news outlets.

Relevant reddit thread for this incident.


Also, /u/TheEarthquakeGuy should be posting soon.

criley2 9 hours ago 5 replies      
Out of curiosity... why is this on Hacker News?
SVG Line Animation for the Uninitiated bitmatica.com
131 points by evandenn  12 hours ago   20 comments top 5
roadnottaken 12 hours ago 4 replies      
Why are all the SVGs displayed as animated GIFs? Is there no way to display native SVG animation in the browser?
tkubacki 10 hours ago 0 replies      
"... Unfortunately Internet Explorer ..."

Every piece of reality has its own IE - Unfortunately.

amelius 12 hours ago 3 replies      
Can this also be done with a calligraphic pen? (I.e., pen with a slanted stroke).

I'm asking because this is a style that is used a lot in cartoons.

mxfh 11 hours ago 1 reply      
I kind of hate mixing SVG with CSS.

Somewhat relieved to learn that SVG SMIL animations are staying in Chrome, for now: https://groups.google.com/a/chromium.org/d/msg/blink-dev/5o0...

puzzles 10 hours ago 0 replies      
I'm working on a project right now where I'm using this technique, but there is one thing that I've been wondering. Is there a way to make the line look more hand-drawn, or faded in some way? I've tried using SVG Filters but I can't seem to get it quite right.
NIO EP9 Fastest Electric Car in the World nio.io
9 points by zw123456  2 hours ago   4 comments top 3
barumrho 1 hour ago 0 replies      
The website doesn't really give a context, but I found this article to be helpful: http://www.wired.co.uk/article/nextev-hypercar-nio-ep9
Animats 33 minutes ago 0 replies      
Why does this site have Baidu ads? It IS an ad.

"Each car costs approximately $1.2 million to build."

code-monkey 33 minutes ago 1 reply      
One electric motor per wheel gives the possibility for some really awesome torque vectoring. I could see a machine learning model being used to figure out the torque to apply to each wheel given the feature inputs of turn angle, lateral g, suspension height, tire temp, road temp, humidity, etc.
Inspecting C's qsort Through Animation nullprogram.com
104 points by nayuki  11 hours ago   22 comments top 9
saretired 52 minutes ago 0 replies      
Sedgewick did animations of quicksort and other algorithms back in the 80's. He coincidentally did his PhD on quicksort under Knuth. Jon Bentley gave a Google Talk about quicksort and inputs that drive it to O(n^2) https://www.youtube.com/watch?v=QvgYAQzg1z8 His implementation of a production quicksort with McIlroy is widely used. It's in BSD, glibc (non-recursive version--some here are saying that glibc qsort is a mergesort, but at least in the code I've read that's not true. Perhaps there's some confusion over the memory allocated by glibc qsort for the pointer stack it creates to avoid recursion). The paper by Bentley and McIlroy called ``Engineering a quicksort'' fills in many details that Bentley omits in his Google Talk.
morecoffee 4 hours ago 2 replies      
Frame Count != sort speed. Cache effects are going to dominate. How close together the red dots are will more closely represent how fast it is.
shric 3 hours ago 0 replies      
https://www.youtube.com/user/udiprod/videos has some fun/pretty sort animations that illustrate clearly the comparison and swaps separately.
mangix 9 hours ago 3 replies      
Surprised that musl is the slowest. Anyone have any insight into why qsort was coded like this?
kylepdm 8 hours ago 1 reply      
Any particular reason none of the implementations just do a random pick of a pivot? This is usually good enough to prevent the O(n^2) behavior that the diet libc implementation runs into.
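For reference, the randomized-pivot idea is a one-line change to a textbook quicksort. This sketch shows the concept in Python (not how any of the libc implementations are actually written):

```python
import random

def quicksort(xs):
    """Quicksort with a randomly chosen pivot: the random pick makes the
    O(n^2) worst case vanishingly unlikely for any fixed adversarial input."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)           # the randomization
    less    = [x for x in xs if x < pivot]
    equal   = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```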
gabrielcsapo 1 hour ago 0 replies      
This is really awesome! I love seeing code visualized!
rhaps0dy 7 hours ago 1 reply      
Why does the BSD implementation compare with copies of the data outside the original array? Is there a performance benefit to doing this?
wfunction 7 hours ago 1 reply      
Why the hell does glibc use quicksort as a fallback with no apparent protection against the O(n^2) case? Aren't there easy in-place O(n log n) sorts? Like heapsort?
forrestthewoods 8 hours ago 0 replies      
I'd love to see this with different implementations of C++ std::sort.
Windows 10 Cannot Protect Insecure Applications Like EMET Can cmu.edu
66 points by doener  12 hours ago   26 comments top 7
ryuuchin 5 hours ago 0 replies      
You can control almost all EMET mitigations except for the ROP and EAF protections through IFEO (Image File Execution Options). There's also the cert pinning but I believe that was only useful for IE. There are also other Windows 10 specific mitigations that don't exist in EMET which can also be controlled this way. The main selling point of EMET was that it did not require recompilation. Luckily you can still control most of these mitigations through IFEO (see below) which does not require recompilation.

EAF uses debug registers, which limits its usefulness, and the ROP mitigations are becoming less useful because of CFG (control flow guard), although the latter does require applications to be recompiled with the latest Visual Studio (and to opt in to CFG, which is not enabled by default). It's not really surprising seeing Microsoft retire EMET considering you can get nearly the same kind of coverage on a vanilla Windows 10 install.

I made a rough guide as to the layout of the MitigationOptions QWORD which controls these mitigations:


There are Microsoft-provided functions which can also enable these mitigations[1][2] when compiled into the code. Also, let's not forget that for now EMET still works fine with Windows 10.

[1] https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...

[2] https://msdn.microsoft.com/en-us/library/windows/desktop/hh7...

sergers 11 hours ago 2 replies      
down for me.

cache: https://webcache.googleusercontent.com/search?q=cache:e1CDpJ...

not sure when it was originally posted; it's noted there is an update today, Nov 21st 2016, which is the same date as the article.

Basically, Windows 10 doesn't use EMET, and MS claims it's because Windows 10 has other mitigation techniques making it more secure. However, as per the article, there are many mitigation steps not included, and many require the application to be compiled specifically for the EMET-replacement mechanisms.

the update to the article today is that the latest Windows 10 release supports more than previously, however it still doesn't support everything EMET provides.

rincebrain 8 hours ago 1 reply      
My initial interpretation, when I had heard about the EMET EOL, was that Microsoft was doing it as a way to spin removing dev effort from EMET into leveraging people onto Windows 10.

Now I'm not sure - Windows 10 doesn't have the full featureset, and I don't _think_ Microsoft is likely to actually introduce the entire featureset into Windows 10 with much lead time before the EOL.

If they do, though, it would certainly be a nice carrot AND stick to get people up to at least a certain update version for that functionality.

reiichiroh 8 hours ago 0 replies      
Palo Alto Traps also covers anti-exploit but I expect that this functionality is something vendors will be building into their upcoming security suites.
ninjakeyboard 12 hours ago 1 reply      
I had to read the headline 7 times to understand the message.
reiichiroh 8 hours ago 0 replies      
If you're looking for a commercial consumer product that blocks a superset of the exploits blocked by EMET, try HitmanPro Alert.
drzaiusapelord 11 hours ago 2 replies      
The latest major Windows 10 update added more EMET features. I imagine by the time EMET is retired, it'll have everything. Retiring EMET seems to be another underhanded trick by MS to get everyone off Win7 and onto 10. Enterprises that depend on it for security will be forced to move sooner than planned, or at least not be allowed to skip 10, since by the time Windows 11 comes out, 7 will have been out of support for quite some time.
Bye Bye Emojis: Emacs Hates MacOS lunaryorn.com
140 points by csmonk  5 hours ago   73 comments top 24
jordigh 3 hours ago 5 replies      
As one of GNU's flagship products, why shouldn't Emacs remind users that it is not in agreement with Apple's treatment of its users? GNU has an agenda, just like Apple does. Intel cripples icc for non-Intel CPUs, making it emit pessimised assembler. Apple makes sure that its hardware only works with other Apple products and that macOS can only be legally virtualised on Apple hardware. Everyone has an agenda.

The difference is that GNU is a lot smaller and has a lot less power and resources than Apple and Intel. So much that it is relatively easy and, in fact, explicitly allowed by the GPL, for someone like Yamamoto to come along and decide that, by golly, Emacs will have unique macOS-only features and Apple deserves to have more money to do whatever it wants to its users because its users enjoy Apple's treatment, smelly GNU/Beards be damned. This is a lot easier than fixing icc for AMD CPUs or putting headphone jacks into iPhones 7.

> We are not welcome, and never will be.

You are welcome. You are very welcome. Apple is not. You should not identify yourself with Apple's operating system.

vilhelm_s 4 hours ago 4 replies      
And several other Mac features were stopped before they made it into the official release. This is Richard Stallman's policy: "GNU Emacs should never offer people a practical reason to use some other system instead of GNU. Therefore, when someone implements a useful new feature but only for a non-GNU system, we do not accept it in that form."



sjm 3 hours ago 1 reply      
This kind of thing is exactly why I will always support and use Mitsuharu Yamamoto's macOS Emacs port (https://bitbucket.org/mituharu/emacs-mac), which has been consistently rock solid and committed to implementing nice-to-haves on macOS that this sort of political bullshit holds back in official development.

Homebrew tap here: https://github.com/railwaycat/homebrew-emacsmacport

astevens 45 minutes ago 1 reply      
20 years ago the FSF had some very forward-looking ideas. Now we have the ornery opinion of old men - it was good enough for us two decades ago, it's good enough for you now.

The point of free as in speech and not free as in beer is that the choice you make does not need to be right or wrong by the standards of other humans. Holding back technology because it's not available on "your" platform is just as monopolistic as any corporate entity they have butted heads with.

burke 1 hour ago 0 replies      
Here's the diff:

     - /* Don't use a color bitmap font unless its family is
     -    explicitly specified.  */
     - if ((sym_traits & kCTFontTraitColorGlyphs) && NILP (family))
     + /* Don't use a color bitmap font until it is supported on
     +    free platforms.  */
     + if (sym_traits & kCTFontTraitColorGlyphs)

failrate 2 hours ago 1 reply      
As a product developer, maintaining a consistent feature set across OSes makes perfect sense to me. It is a struggle to get consistent behavior across different versions of Windows, let alone entirely different OS types.
cyphar 3 hours ago 1 reply      
> As MacOS users, we will always be second-class citizens in Emacs land.

GNU Emacs is part of GNU/Linux. Why are you surprised that [other OS] is a second-class citizen for a project which already has a clear OS target?

Besides, the guidelines for GNU packages clearly state that GNU packages cannot emphasise features of proprietary OSes. So it's not really the maintainer's fault; it's one of the rules for GNU packages (you can blame the FSF, but you've got to look at it from their perspective).

GNU packages are different from other projects, because they're actually part of an OS. So they have an obligation to support the OS they're a part of more than other operating systems (especially proprietary operating systems).

beefsack 4 hours ago 3 replies      
It's easy to see this is but one side to the story from the level of emotion in the blog post. Does anyone have more information regarding the decision itself?

The post also misses the second half of the paragraph, which suggests a method to get emoji working again:

 If some symbols, such as emoji, do not display, we suggest to install an appropriate font, such as Symbola; then they will be displayed, albeit without the color effects.

dmitrygr 3 hours ago 1 reply      
I wish there was a way to force other applications to disable color emoji.
strangelove 2 hours ago 0 replies      
,,Let's sink this in: The Emacs developers deliberately disabled a feature that was working perfectly fine for MacOS users just because it is not available for free systems. What a daft decision.''

- well, right now, it's just bloating Emacs for the rest of the world. If one needs it on MacOS, I'm sure it can be added to a personal installation.

kalleboo 3 hours ago 0 replies      
The last time Emacs came up here, there was some discussion about Stallman's obsession with free OS purity and some separately-maintained macOS releases were suggested https://news.ycombinator.com/item?id=12832486
confounded 3 hours ago 0 replies      
Emojify[0] provides all the emoji support I seem to need on both Ubuntu and macOS. It uses non-proprietary emojis though. If that's unacceptable, it can take arbitrary directories for emojis[1].

I must say, Emacs still runs better on macOS than Xcode does on Linux.

[0]: https://github.com/iqbalansari/emacs-emojify

[1]: https://github.com/iqbalansari/emacs-emojify/issues/19

_ph_ 1 hour ago 0 replies      
It is a strange interpretation of freedom if you are free to run the software how you want, except if RMS disapproves of your OS. I think this way of "defending free software" is rather counterproductive. I say this as a strong supporter of free and open source software and both a Mac and Linux user. I want Linux or any other free operating system to be strong in the market. But this should be reached on features and capabilities, not on limiting software on other operating systems. Fortunately, thanks to its open source nature, it should be possible to maintain a branch of Emacs for the Mac which keeps this feature alive. This is where open source is strong.
gok 3 hours ago 2 replies      
"Therefore, when someone implements a useful new feature but only for a non-GNU system, we do not accept it in that form."

Users of non-GNU systems that use GNU software: the FSF is actively trying to make your life more difficult.

waynecochran 1 hour ago 0 replies      
Emacs-Gud will never support llvmdb either... sad...
Animats 27 minutes ago 0 replies      
Does anyone really need colored emoji in Emacs? It's a programmer's editor.
digi_owl 2 hours ago 1 reply      
And here I can't grok the appeal of emojis outside the tween girls segment...
nephrite 1 hour ago 0 replies      
I wish all emojis just disappeared. They're stupid, don't add any value, waste unicode space and developers' time.
no_protocol 4 hours ago 2 replies      
Can you just recompile it with that feature enabled?
parenthephobia 3 hours ago 0 replies      
FSF policy is that GNU software should not have features that will only work on non-free operating systems. The decision to remove multicolor glyph support was made on that basis.

The FSF don't want to make their software better on non-free operating systems which, given their goals, doesn't seem particularly unreasonable to me.

jboogie77 4 hours ago 0 replies      
vim 4 life
disposablezero 3 hours ago 0 replies      
atom probably has emacs emulation
bitmadness 3 hours ago 0 replies      
Absolutely sickening. Stallman needs to leave the crib and join the real world.
Markdown-in-js: inline Markdown for React and JSX github.com
77 points by kylemathews  13 hours ago   28 comments top 7
Touche 12 hours ago 3 replies      
This is a great illustration of why using random babel plugins you find on the internet is super dangerous.

This works because it assumes you have named your local identifier either `markdown` or `md` and are using that. If you want to use something else you need to specify a custom pragma: https://github.com/threepointone/markdown-in-js#pragma. And that feature didn't exist until 6 hours ago.

Why is this dangerous? Let's imagine you use `markdown` and you chug along for another 5 months on this project and you need to use `mdown` in some file, or maybe a coworker uses `mdown`, because why not? So it doesn't work. And you're confused. And you spend hours trying to figure out why your app is broken.

People need to realize that by using a bunch of AST transforms you are creating your own one-off language, one that has no visual indicator that it is different in any way. It's just a tagged template literal, right? Stick to babel plugins for official language features that at least have a chance of becoming part of JavaScript. This thing is going to stop working as soon as the developer loses interest, and then you're going to be stuck with your Frankenstein language no one can understand.
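To make the failure mode concrete: without the compile-time transform, a tagged template literal is nothing but an ordinary function call, so a plugin that rewrites calls by identifier name has no way to catch a renamed tag. A minimal sketch (the `md` tag here is hypothetical, not markdown-in-js's actual runtime):

```javascript
// Without the babel transform, a tagged template literal is just a
// function call: the tag receives the literal's string parts plus the
// interpolated values. A plugin that rewrites calls named "markdown" or
// "md" never sees a tag spelled "mdown", so that code silently falls
// through to whatever `mdown` happens to be at runtime.
function md(strings, ...values) {
  // naive runtime fallback: interleave string parts and values
  return strings.reduce(
    (out, s, i) => out + s + (i < values.length ? String(values[i]) : ''),
    '');
}

const who = 'world';
const doc = md`# Hello ${who}!`;
// `doc` is still raw markdown text; nothing was compiled to React elements
```

That's the whole trap: the source looks identical whether the transform ran or not.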

threepointone 12 hours ago 2 replies      
Hi folks, author here. I wrote this for myself for a docs site I'm building, and didn't want to ship a markdown parser to the browser. It's mostly for aesthetics, without sacrificing performance. Custom components for primitives are a nice addition, especially for styling.

I don't know whether it's a good idea, but I like it so far.

nathancahill 12 hours ago 1 reply      
I'm all for pushing the boundaries of syntax with Babel, but frankly this looks horrible. Is there an appealing reason to use this? I could see it being interesting to render Markdown (like from a CMS) at runtime, but at compile time there's not much benefit over JSX right?
andreasklinger 13 hours ago 2 replies      
Maybe a stupid Q - I assume there is a good reason - most likely related to a syntax clash

Why is this not a component?

as in:


b34r 2 hours ago 0 replies      
bArray 11 hours ago 0 replies      
Slightly related, I wrote some basic JS a while back to convert a page full of markdown into HTML client side, without locking up the client:

Original - http://coffeespace.org.uk/loader-orig.js
Minified - http://coffeespace.org.uk/loader.js

Why? I don't know, it seemed like a great idea at the time to get my clients doing all the heavy lifting. Page download times are typically lower than for all other pages, given how rich the content is.
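The "without locking up the client" part is the interesting bit. The usual pattern (a generic sketch under my own assumptions, not the actual code in loader.js) is to convert in small batches and yield to the event loop between them:

```javascript
// Sketch of non-blocking client-side conversion: split the markdown into
// blocks, convert a small batch, then yield with setTimeout(0) so the UI
// thread can repaint between batches. `toHtml` is a stand-in converter
// (one trivial heading rule), not a real markdown parser.
function toHtml(block) {
  return block.startsWith('# ')
    ? '<h1>' + block.slice(2) + '</h1>'
    : '<p>' + block + '</p>';
}

function renderInBatches(markdown, emit, batchSize, done) {
  const blocks = markdown.split(/\n{2,}/);
  let i = 0;
  (function step() {
    for (const block of blocks.slice(i, i + batchSize)) emit(toHtml(block));
    i += batchSize;
    if (i < blocks.length) setTimeout(step, 0); // yield to the event loop
    else if (done) done();
  })();
}

// In a browser, `emit` would append each HTML fragment to the DOM.
renderInBatches('# Title\n\nfirst paragraph\n\nsecond paragraph',
                html => console.log(html), 2);
```

Where available, `requestIdleCallback` is a nicer yield point than `setTimeout(0)`.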

true_religion 13 hours ago 0 replies      
Is there a demo?
Learn the basics of data science with these books hackernoon.com
55 points by becewumuy  5 hours ago   4 comments top
rcar 2 hours ago 2 replies      
Would just throw an extra plug for Python for Data Analysis. Though the title might sound a little bland, it's a good, practical summary of how to use pandas for the sorts of data analysis you often have to do in data science work.
Apple Abandons Development of Wireless Routers bloomberg.com
345 points by james_pm  18 hours ago   501 comments top 73
taurath 10 hours ago 9 replies      
What I miss most about Apple is that it was the company that made reasonable decisions for you when it came to security and all the technical details that are endemic in any computing. They had Opinions with a capital O, and made sure that every product you'd buy would be guaranteed to work with other products. If you had the money, you could focus on what you were trying to get DONE rather than spend forever learning about implementation details. Abstract people from the hardware; focus on the use cases.

Now it seems they're leaving people who depended on that behind. No company offers an ecosystem that doesn't require "fiddling" to get things to work correctly. Maybe this is the way it has to be, but I really wonder what Apple's strategy is going forward, because its clear that they've slowed down or stopped development on everything other than their phones/pads and the occasional laptop. What are all their engineers doing? What is the use of having hundreds of billions in the bank if you're not investing it in growing or creating product lines?

appleiigs 16 hours ago 5 replies      
For the past decade I've been blindly buying Apple products (but been using Macs since the late 90s), while looking at alternatives with disdain. Paying the Apple premium allowed the tech to get out of the way. Very happy to continue paying, but it seems like Apple is forcing me to leave. In this relationship, I'm the one getting dumped. I'm not sad about it, but from a business perspective, why are they actively pushing me away? Canceling product lines (routers, Mac Pro), not updating the active ones (Mac mini, iMac), and being difficult with the products they do update (MBP, iPhone).

On a relative basis, Apple has infinite resources. It has cash, the brand and can attract the right people to run the business. Each product line, like the routers and mac pro can be focused on because they have the resources to do it. Most companies re-focus on core products because they are spread too thin - Apple is not.

joakleaf 8 hours ago 3 replies      
So what is Apple working on now?

iPhone 7 looks like iPhone 6 sans the headphone jack. So no design changes for 3 years. To me, 7 does not feel like a significant update over 6.

Macbook Pro got a touch bar. Otherwise, minor design changes since last revision. Does not feel like a significant update.

iMac not updated for 12 months. No significant design changes for years, but the screen resolution is now Retina.

Macbook Air not updated since March 2015 (still low resolution). No significant design changes since introduction.

Mac Mini not updated since 2014. No design changes since 2011.

Mac Pro not updated since 2013.

iOS 10 and macOS are minor revisions.

Thunderbolt display and now Airport extreme/express are dead

The iPad Pro 9.7" looks like iPad air (1 or 2). iPad pro 12.9" looks like any existing iPad but bigger. The iPad minis all look alike.

They didn't release new iPads this fall. Isn't that a first?

It feels like the hardware line-ups are getting more confusing: Two different iPad sizes called Pro as wells as "Air 2" and the minis. It made sense that the Pro was the largest one, but they confused us by releasing a smaller Pro that looks like an Air 2, but has a better display than the large Pro. How many iPads do we need?

There is the main iPhone line (... 6 6S and 7) that comes in two sizes, and then the evil cousin called iPhone SE which looks like a 5.

The laptop line is getting more messy too. The Macbook is like a slower Macbook Air but with higher resolution and 12". They killed the 11" Air, but we now have 3 laptops at 13" (Air + two types of Pro). Is the 13" Pro without touch bar option really necessary?

All these series ("", Air, Pro, SE, Mini) which pop in and out of existence feels like they are trying different names for marketing reasons (especially for the iPads).

I appreciate the yearly impressive but predictable CPU/GPU and software improvements, but it is really starting to feel like they are either struggling a bit, or working on something that takes a lot of resources from non-essentials and focus.

nodesocket 8 hours ago 1 reply      
I see a major problem at Apple. They are being run by committee. Tim Cook is not a visionary or entrepreneur at heart. He is a logistics guy.

The best-run technology companies are run by dictators with a strong technology and product background (Jobs, Elon Musk, Reed Hastings). Let me say it again... yes, dictator. You are never going to get a large group of people to agree on something, and if you do, it's watered down, compromised, and lacks ambition.

phs318u 3 hours ago 1 reply      
I can relate to the sadness of those witnessing the slow-motion atrophy of an epic company. Apple have been the company that "did it right", that made it "just work" and got out of your way, all while looking ridiculously beautiful, without compromising on engineering quality and with sensible security baked in. Once upon a time I thought/hoped Apple would buy Sun - what a world that would have been!

Given that's no longer the company we're being left with, is there a window of opportunity for a new entrant to step in and start filling that void using the same principles of design and cohesion that Apple have made famous? A sort of "Apple for Nerds"[1].

[1] BTW, this is not Google, nor should it be any company where the "customer" is actually the product.

falcolas 17 hours ago 1 reply      
Restricting their efforts to only the most profitable offerings like this seems like a continued step backwards for a company which used to do so much innovating. If all you work on is the iteration and merging of existing devices, you're going to be left in the cold as you're out-innovated by your competitors.

This is what happened to Microsoft, and it took many years and a major internal upset to get them back on a positive track. And now look at Microsoft since they've started diversifying and innovating again: they are providing an OS (and hardware) which is genuinely interesting to professionals in a variety of fields. They are going to steal Apple's thunder here soon, unless Apple really makes an effort.

electic 8 hours ago 1 reply      
I think what you are witnessing here is a shift in demographics within the company. When Jobs was around, product was king. When engineers said we needed these components, the business folks would figure out how to make the economics work.

Now what we are witnessing is the product people are not in charge. The business folks are. So you start to see them tell the product folks they can't do something because it is not cost effective and that is profound.

symfoniq 9 hours ago 1 reply      
It's kind of funny:

I recently decided that Apple's ecosystem no longer works for me, and that I'm gradually going to start putting most of my tech eggs into other baskets.

And whaddaya know? Between the monitors, the wireless routers, and the high-end professional workstations, Apple seems to agree with me that their ecosystem no longer works.

I guess it's nice to have some confirmation.

abakker 17 hours ago 10 replies      
Ugh, this is the first news that I've seen that I feel is really bad. Monitors were ok, that seemed understandable since they were for professionals to go with the mac pro. Apple's monitors were at best "prosumer".

Wifi though has always been a very big PITA for consumers, and Apple's hardware/software integration has always been a better bet for ease of configuration, and honestly, reliability.

The optimist in me hopes that maybe they'll have something better for us, or are making an acquisition to replace their current product lineup completely.

The pessimist in me thinks that maybe they're leaving this market to avoid needing to develop hardware that meets its publicly stated standards for protecting consumer privacy. Potentially they have been approached/mandated to enable some kind of backdoor in it, and they chose to stop producing it, rather than comply. /tinfoil_hat

kalleboo 17 hours ago 2 replies      
Third-party routers have gotten a lot better in reliability since I got my first AirPort (when getting a consumer router that would do 100 MBit NAT routing was nearly impossible, and my parents had one of those lamp timers on their ADSL router to reboot it every day at 3 AM).

I felt this coming as long as Apple never added iOS backup support to the Time Capsule. The APFS migration seemed like it would be the final bullet.

Had to ditch my latest-model Time Capsule when I got fiber since Apple doesn't let you change the MTU (required for my PPPoE over fiber), and getting faster Time Machine backups than Apple's anemic CPU could muster was just a plus. I was hoping to reuse it as an 802.11ac bridge to my TV/Media center, but, nope, Apple removed wireless bridging as a feature a couple years ago.

Good riddance.

edit: just got reminded they spent a bunch of money developing special paper and ink for a $300 book instead of this. OK, yeah, no there's logic here.

jarjoura 9 hours ago 0 replies      
The problem with Apple's internal culture is that even though engineers are brilliant and fully capable, products are designed from the top down.

Apple employs people who are a very passionate about wireless technology. VERY passionate. They just don't get the resources or freedom to take their products to the next level. They have to wait for someone from above to be sold on the idea. Yet, it never happens because the engineers are told to keep fixing bugs and keep things humming along. File a radar, and keep doing your job.

Probably what happened, a few senior engineers got yanked onto the Watch project or this fabled car thing. They worked and worked and worked on it, and left the junior guys fixing macOS/iOS airport bugs. Then when it came time to build a new revision, they noticed suddenly they were WAY behind the market. Meanwhile, engineers continued fixing macOS/iOS bugs, this time for 3rd party systems, and bam, someone in upper management probably asked, "umm, why are we still building our own thing?" They probably merged the OS wireless teams with the driver team and called it a day.

So here we are today, Bloomberg most likely got a tipoff from a disgruntled employee who didn't like they were killing the project.

lsadam0 17 hours ago 2 replies      
I'm all in the Apple ecosystem, but I'm starting to feel pushed out. First the Mac Pro, then Displays, now this? Is Time Capsule next? My airport extreme is the most reliable home router I have ever owned.
oceanswave 14 hours ago 1 reply      
So I've ignored all the chatter of the headphone jack and the MBP and chalked them up to being in a transitionary period both for wireless and USB-C.

But this move seems to indicate a fundamental misunderstanding of the Apple ecosystem. Starting with the WiFi in a consumer's house, today, the Apple 'System' provides you with the entire experience from simply using your iPhone/iPad to easily playing your music on your speakers through AirPlay to watching content on Apple TV to backing up your computers via TimeMachine and providing remote access through Back-To-my-Mac.

Further, the devices provide guest and "Mesh" like functionality to have an extended WiFi network -- before mesh wifi technology existed.

All this and easily configured though, most appropriately, an app.

Shutting down this most fundamental base functionality of AirPort to me is a signal that Apple really doesn't want to be in the business beyond iPhones and iPads.

This comes at a time when the competition is actually ramping up its WiFi devices (Google WiFi), providing something that Apple has had for years.

It's mind boggling.

gchokov 17 hours ago 3 replies      
That's just too sad. I was thinking recently how my 3-year-old AirPort Express was the best router I've owned. These routers were always super easy to configure and I remember maybe one reboot in its lifetime... it powers: 2 MBP Pros; 2 iPhones; 1 Apple TV; 1 Roomba 980 cleaning bot; 1 Footbot (air quality monitoring); 1 Apple Watch; 1 Sony PS4... everything working flawlessly

Sad day for me.

mikekij 16 hours ago 2 replies      
How does Tim Cook not see that the ease of use of their entire ecosystem (from routers and monitors to software utilities) is what justified spending twice as much on a Mac or an iPhone as on a competitor's product? Thinning that ecosystem makes their hardware premiums much harder to justify.
twblalock 6 hours ago 1 reply      
I'm on Apple's side on this one.

I know a lot of people who have worked, or currently work, at Apple, and a lot of other Apple fans who buy almost all of their technology products from Apple. I don't think any of them ever bought an Airport.

Airports were a lot more expensive than most consumer routers, and ease of setup is not a huge differentiator. With most routers, you go through the pain of setting them up the day you buy them, and that's it. Even non-technical people don't seem to have too much trouble setting up their generic consumer routers.

I would much rather see Apple focus its engineering resources on a good iPhone 7 than waste them on a wifi router that is not nearly as important to the company's ecosystem and revenue.

anexprogrammer 17 hours ago 0 replies      
I'm less bothered by an exit from routers than monitors and the other changes. That said they were nicely implemented and reliable.

All of the current changes are taking away.

Aesthetics were and are a huge part of the appeal of an Apple filled desk. Not a one of the other makes gets close yet Apple haven't even asked LG to make their 5k screen look nice or even complementary.

After the underwhelming MBP with rubbish travel-free keyboard, and not having a monitor to sit alongside my iMac, they seem incoherent. Where's the new Mini, Pro, 34" curved Cinema screen or iMac?

We just need someone else to discover aesthetics and they have a real problem to contend with. I care what the overall look is of things in my home.

brianbreslin 17 hours ago 1 reply      
I think its harder and harder for Apple to charge $199 for a router when you can get a modem/router combo for $70, or "free" from comcast. The margins on these devices are declining fast. Additionally Google just released their router, and netgear/linksys etc have been dominating that market for ages.

Why allocate the manpower when the others on the market have caught up with ease of use and this generates less than 1% of your revenue?

IgorPartola 10 hours ago 1 reply      
It's 2016 and Wi-Fi is still a pain. Security is non-trivial, firmware is not patched, joining a network is complicated, extending coverage is very tricky.

My solution has been to use a TP-Link router running OpenWRT as the router, and a UniFi as an access point. I run the Ubiquiti controller on a separate server. This gives me very good performance and coverage, but the ease of use is zilch. It requires me to know way more than an average person should need to know just to get the network set up.

I have never used the Airport routers from Apple, but I understand that they aimed at fixing a lot of these issues. Ultimately I chose not to go with them because they were still not a turnkey solution, while not giving me nearly the performance (speed and coverage) I wanted.

I wonder who will be the go to recommendation from now on.

nerdwaller 8 hours ago 1 reply      
This is pretty disappointing. I'm not an Apple fanboy, but after I went through a new router every 6-8mo for two years (none of which were low end) - I decided to try the Apple one. It's now been at least 3 years or so with zero issues.

Here's to hoping AC is good enough for the foreseeable future. I'm presuming it will be for the average joe.

nodesocket 8 hours ago 1 reply      
As an Apple shareholder, and very happy user of Time Capsule this is extremely disappointing. My time capsule is the best value for a solid wireless AC router with 2TB of disk backup which I use to backup my iMac and MacBook Pro with Time Machine.

Can anybody name a solid alternative for $299?

lowbloodsugar 10 hours ago 0 replies      
Seems like Apple has missed the point. I buy expensive devices for the ecosystem, and that ecosystem includes the routers (of which I have four in my house). I also bought the expensive devices because they would expand to meet my needs: my 2011 17" MBP came with 8GB RAM and now has 16GB. It now appears that Apple is the iPhone company. Someone has looked at the numbers and decided that everything else is tiny by comparison. Except it's only tiny by comparison to the iPhone.
noir-york 17 hours ago 2 replies      
Another sad decision by Apple. My Airport Extreme and multiple Expresses worked great. Loved the Utility app to help configure them - handy little tool.

With Apple's resources the router division cannot have been a distraction, nor would not having it materially affect Apple's numbers to any large extent. Maybe this reflects a retrenching mindset taking hold within Apple?

Negative1 8 hours ago 0 replies      
My Airport Extreme finally died a few weeks ago and I replaced it with an OnHub (the TP-Link one). Couldn't be happier -- my Wifi range extends well beyond the perimeter of my house, speeds (appear) to be faster with less connection issues _and_, I can monitor router activity and other stuff directly from their App (though that is the one area it could use a few more features).

Google is definitely winning some points in the "simple and works" category. Pricey but worth it, IMHO.

mindcrash 14 hours ago 3 replies      
In case anyone is looking for replacements: Ubiquiti makes some really great professional grade stuff (in regard to design and hardware).

I would recommend pairing a couple Unifi AP AC Pro Access Points (with a maximum throughput of around 1.3Gbps on the 5Ghz band) with a Unifi Security Gateway and for most scenarios you will be done and probably never look back. If you really are into networking you could also take a look at their EdgeMax series which has some router features the more simple Unifi products do not provide (like a shit ton of routing features, and the ability to hook it up directly onto a fiber cable using the provided SFP port)

Take a look at https://www.ubnt.com/ if you are interested. Hope this helps :)

carsongross 16 hours ago 0 replies      
The bean counter mentality ruins product companies again and again (see autos) and yet even a company as powerfully idea-driven as Apple falls victim to it.

I was skeptical of the predictions of Apple's implosion once Steve was gone. How important can one guy's contribution be?


matthewmacleod 17 hours ago 4 replies      
That's annoying. Airport routers have been the only ones I've bought that don't randomly quit working or require periodic restarts.

Time to look at something like Ubiquiti I guess.

laughfactory 2 hours ago 0 replies      
I'm astonished to hear Apple is abandoning development of monitors and routers. The only way to understand their behavior is from the accounting and finance side. Someone in accounting/finance must've pointed out that they don't make a lot of money making and selling routers and monitors and such. So it seems like an easy win to drop those lines of the business.

BUT, I suspect this will prove very short-sighted in the long run, and possibly endanger the company as a whole. The one thing Apple is (it seems) overlooking is the ecosystem effect. While some may not buy Apple monitors or routers, many do. Sure they last a long time and you don't need to constantly buy another one, and there may be viable substitutes, but if you're one of those people who considers themselves Apple-only (I have a non-techy friend like this) then eliminating Apple products just reduces their attachment to all things Apple. Pretty soon they start realizing that, hey, this non-Apple router works just fine! Hey, this non-Apple monitor works just fine, too! And the importance of the brand is reduced. Then they inevitably start thinking about how maybe a non-Apple laptop might be just as good (too) and cost a boatload less. Or maybe the new Pixel phone is as good as their aging iPhone. After all, they've heard good things about the camera.

So, yes, in strict accounting terms it may make sense to eliminate product lines with thin margins and low volume, but those lines prop up the whole brand. Personally, I like knowing that I could go buy an Apple monitor. I know they're exceptional quality, and beautiful to boot. I may not have the cash to do it now, or even soon, but it's something to look forward to. Same with their excellent routers. But now? I guess I'll have to keep an eye on what Microsoft is up to.

It's ironic to me that Apple has lost sight of what supports and nurtures their brand, while it seems Microsoft has discovered Apple's secret sauce. Microsoft is creating beautiful, expensive products which almost certainly have thin margins and low volume, but is doing so because of the cachet that comes from it. From the perception that the Microsoft brand is associated with beautiful, high-quality electronics. This rubs off on everything else they do. And man, do those new Surface Studios look nice.

I've been gradually moving the whole family to Mac-land, but now I may have to rethink. It seems Apple has lost touch and is now run by accountants and analysts--not designers and engineers.

scarface74 5 hours ago 1 reply      
I don't see any reason for the AirPort. I can only think of a few scenarios with respect to consumers and Wifi:

1. The router they get from their ISP is good enough, they can get support for their internet access and their wifi from one place and they don't mind paying the $7-$10/month. This seems like it would be the easiest for the non-tech user.

2. A combined router/cable modem is desired but you don't want to pay $7-$10/month. Buy a device that is compatible with your cable provider -- again something that I don't see Apple making.

3. You want a separate ISP modem and more advanced wifi router -- the only case that an updated Airport would satisfy.

4. You want an easy to configure mesh network for better coverage -- buy some Eero devices.

5. You want a versatile travel router -- one that can serve as a regular router (ethernet -> wifi bridge), a wifi->ethernet bridge, a wifi->wifi bridge (to create a private network from a public network), or an extender. There is already a $30 router that can accomplish that. I have one of these:


qwertyuiop924 16 hours ago 1 reply      
Good riddance. I've had an AirPort for years, and the thing never worked right. If you're all Apple, things are fine, but if a Linux or Android device connects to your network (I use both), you can forget it: the connection will be awful, and it will slow down and drop constantly.

And yes, it actually is the AirPort, as said devices actually work fine on other networks.

mahyarm 11 hours ago 2 replies      
When are we going to get an AirPlay Express now that the AirPort Express is going away? And I don't want to maintain a Raspberry Pi or deal with some AirPlay speaker's bad AirPlay implementation.
gigatexal 13 hours ago 0 replies      
I hate this. One of the draws for me to Apple has always been their vertical integration: everything made by Apple (> 90% of the time) just worked. I never had any issues with my airport extreme and was looking forward to what cool things they would do with them: mesh networking, integrating home automation points? Fuse them with Apple-TV? Alas, just put all your eggs into the Iphone I guess.
nathanvanfleet 17 hours ago 1 reply      
Wow, so no more simple Time Machine backup product, and no more device to stream music to (maybe they couldn't accept that they had a headphone jack?). This sounds like quite the departure from their previous strategy of a digital hub with numerous features around the home.

I honestly think they have become myopic. They aren't seeing the big picture of what some of the less profitable products and features are accomplishing, just that those products aren't making enough income.

kimshibal 7 hours ago 1 reply      
I have a friend at Apple. The wireless team will be moved to Dongle department.
tlrobinson 17 hours ago 0 replies      
"sharpen the company's focus on consumer products that generate the bulk of its revenue"

I'm sure Apple of all companies knows this, but revenue of a particular product doesn't necessarily show the whole picture when some of your brand's appeal is that customers can buy all Apple products and be fairly confident they work well together.

Separately, I'm very happy with Ubiquiti products, but I'm also a power user.

dvcrn 7 hours ago 0 replies      
This is sad. I loved Apple routers and hoped for an upgrade anytime soon.

I know there is enough hate towards Apple here and I will probably still buy their new 15" Macbook, but I'm hoping for either Apple to get back to what they did before, or some other company stepping in and taking over the things Apple stopped doing.

Looking at Google for example: Started building high-end phones (iPhone), released a pretty good router (Airport), actively pushing chromecast (AppleTV). What if Google were to make a powerhouse of a laptop that's not running ChromeOS? Could that be viable?

The next Macbook I buy will hopefully last another 5-6 years, or hopefully longer. So for me 5-6 years for Apple to build something truly impressive. Who knows? Maybe Apple really does have a broader vision that we just don't see yet.

ausjke 3 hours ago 0 replies      
I really do not like the ISP routers that bundle a DSL/cable modem and a wireless router together. Just let modems be modems and routers be routers; that way you can use any router model you prefer and don't get stuck with the ISP's.
usaphp 17 hours ago 0 replies      
if you think about it - they've abandoned monitors, routers, Mac minis, MacBook Airs, iPods. What the hell are all those people who worked on these products doing now? I could understand if they had some other great products in the works, but the Mac Pro has not been updated for a while and iPhone and MacBook updates are pathetic. Looks like Apple is now focused more on marketing and social buzz than creating actual products...
nashashmi 17 hours ago 1 reply      
The Post-Jobs Apple:

- Steer Apple towards its more established areas of expertise, where margins and competitive advantage are high and the rest of the competition is dismal;

- Clip non-innovative departments where purpose and identity are lost;

- Concentrate resources where Apple's leadership is comfortable;

- Sunset all else.

Razengan 2 hours ago 0 replies      
I wonder if this has anything to do with agencies like the NSA "requesting" Apple to put backdoors in their future routers..
tomovo 7 hours ago 0 replies      
I wonder what would happen if Adobe released Photoshop for Linux. Or if Ableton for Linux was released. I'm pretty sure those would make them try A LOT harder on the desktop front.
post_break 17 hours ago 1 reply      
Displays, wireless routers, next headless macs?
shmerl 4 hours ago 0 replies      
I always used routers that allow installing a customizable OS, so Apple was never a good choice. Something like the Linksys WRT series today is way more interesting than whatever routers Apple made.
jstsch 7 hours ago 0 replies      
Hmm, since there is currently a great AirPort with 802.11ac support out there, what would Apple develop right now? Might it be that they'll come out with a new product when 802.11ax is due to ship, which might be a few years down the road?

For the rest, I'll repeat what everyone says. The Airport is an essential part of the Apple ecosystem. It just works. I'd be surprised if they actually drop the product.

mrlambchop 7 hours ago 0 replies      
My assumption is that voice assistants (Alexa, Google Home) and mesh network access points (Ubiquiti, Eero, Google Wifi) will soon converge into a single device.
rxlim 15 hours ago 0 replies      
According to the NetBSD Wikipedia page[1] the firmware in Apple wireless routers is based on NetBSD.

I have heard a few people say that their Apple router has been very reliable, so maybe it has to do with it running NetBSD, or at least I would like to think that is the cause.

[1] https://en.wikipedia.org/wiki/NetBSD#Examples_of_use

rconti 13 hours ago 1 reply      
They already abandoned my Airport Express.

It's too old and can't be managed with the new version of the AirPort Utility. The old version (which was very difficult to find) wouldn't run on modern macOS.

Fortunately I have a 10 year old Mac Pro and was able to download the old version of the software and make it work, but it's just not worth the effort every time I have to reconfigure it. IMO the Airport Utility software was already pretty wonky, it was a bit confusing to try to connect to the unit. You'd have to do a few resets of the device before it would show up.

Once it works, it works GREAT though.

Oh well, I've got all Meraki gear in my house and it works flawlessly. 4k streaming over wifi, no problem. That said, I was lucky enough to buy a house wired with cat5, so all of my bandwidth-hungry devices are wired anyway.

Ubiquiti is probably the way to go for most people though. Get rid of the consumer junk.

digitalneal 16 hours ago 0 replies      
From my point of view, the market is saturated with startups offering mesh wifi services that are already "Apple-fied" in terms of simplicity to set up and activate.

Why bother developing in that space when you can just buy whichever startup matures to the largest market share? Throw those engineers somewhere else.

mcgrath_sh 15 hours ago 1 reply      
To those asking about replacements... I built a home router with about $150-250 of parts, an old small HD, and an old computer case I had laying around. I have wireless access points on 2/3 floors (an old Cisco router for the basement, an 802.11n ASUS router for the rest of the house). I run pfSense on the box. I have wired internet on as much as I can and wish I could use it for more. I don't understand the desire for everything to be wireless. My TVs, desktops, gaming consoles, etc. do not move.

I have never been happier with my internet setup. The only time I have had issues was when FIOS was out. The only restarts I have had to do was when my power went out. It has been close to two years now and this is the least amount of maintenance I have ever had to do.

phmagic 16 hours ago 1 reply      
I'm not sure if this is good or bad news for companies like Eero. In the short term, this is great news because of less competition for the extremely easy to use Wi-Fi router, but I have to wonder why Apple got out of this business, since they often see technology trends years down the road.
adolph 13 hours ago 1 reply      
Hopefully they will open up Power Nap backups [1] to non-Apple hardware. For my next home router I'd like to go the scooter computer [2] route, but having to have the MacBook open and awake to back up is obnoxious.

1. https://support.apple.com/en-us/HT204032

2. https://blog.codinghorror.com/the-scooter-computer/

JamiePrentice 15 hours ago 0 replies      
I love the reassuring sound of the hard drive in my AirPort Time Capsule spinning up when I get back to my desk at home.

The Time Capsule was pretty much plug and forget, easy to use. Now when it comes time to retire the Time Capsule I'll have to look outwith the Apple ecosystem and find something else, tinker with its configuration (more than likely for hours) and hope it continues to support my devices with firmware updates.

I'm sure my QNAP NAS has Time Capsule functionality but I'm new to the NAS world and I don't yet feel comfortable trusting it with my onsite backup.

optimuspaul 17 hours ago 2 replies      
I hope they still make an airplay device that can take over for my Airport express.
sigzero 17 hours ago 2 replies      
What are some good alternatives? I have the last "tower" version.
mixmastamyk 8 hours ago 0 replies      
Unhappy; I settled on the AirPort Express a while ago as a nice little box that was secure and kept out of the way. No faith in other companies to deliver a streamlined experience without vulnerabilities.
rabboRubble 9 hours ago 0 replies      
Much late to the convo...

Although my old Apple router is still chugging along perfectly well, I'm bummed about this change. First router I have ever felt 100% comfortable with the configuration and operation.

lowken10 15 hours ago 0 replies      
Sometimes people make the mistake of thinking that people want the perfect solution when often good enough is fine. One great example is MP3 vs lossless audio compression. The average listener (myself included) doesn't care about perfect audio. We are perfectly happy with good enough.

Wireless networking falls into this category. Yes, a wired home network would be faster, but for the vast majority of tasks wireless is fine.

themagician 7 hours ago 0 replies      
They may just throw a router into the next AppleTV. AppleTV is already positioned as the "center of your home" for HomeKit-enabled devices.
tedmiston 10 hours ago 0 replies      
This seems consistent with their decision to get out of displays.
zanybear 13 hours ago 0 replies      
I don't understand why this couldn't be a differentiator for a product competing with Amazon Echo and Google Home; surely you could install one in each room, and it would provide better networking, sense your presence, and respond in context.
TYPE_FASTER 10 hours ago 0 replies      
They are probably looking at competitors entering the space. Starry, Google, and more are entering the $200 "we'll take care of that for you" market.
fiatpandas 8 hours ago 0 replies      
I have a feeling that we'll see new wifi-emitting hubs from Apple in the near term. They just won't be what we conventionally think of as routers, and will include a host of new features to support a new ecosystem of Apple products for the connected home, e.g. thermostats. The new centralized hubs will be different enough in concept from AirPorts to support abandoning the AirPort line completely.
olssonm 17 hours ago 1 reply      
What's next, the iMac? =(
Randgalt 13 hours ago 0 replies      
Damn - I'm speechless. What the heck is going on in Cupertino? The AirPort Extreme is THE best wireless router you can buy. Also, it has the integrated Time Capsule, which is incredible.
XorNot 17 hours ago 0 replies      
I've never really liked AirPorts, but they were featureful. Honestly though, if Ubiquiti would slap a web UI on their stuff, they'd take over the market.
whywhywhywhy 17 hours ago 0 replies      
Worrying that they outlived displays and Mac Pros
redial 18 hours ago 3 replies      
This is the way the world (of Apple) ends, not with a bang but with a whimper. One for each unit that turns off the lights...
KiDD 12 hours ago 0 replies      
This is wild speculation...

Why would Apple need to upgrade the AirPort Extreme anyways?

dictum 17 hours ago 2 replies      
Wireless routers don't sell like hot cakes, but this was never the point of Apple making them.

Just try to imagine Steve Jobs setting up a generic router.

* * *

Now imagine him searching for a wireless router and seeing some Google product in the top rankings.

(EDIT: I tried to play with the "Steve wouldn't allow this" trope and failed; removing it to reduce the noise)

kirkdouglas 17 hours ago 2 replies      
Do they still sell Time Capsule?
peterwwillis 8 hours ago 0 replies      
Dear Apple users,

There are still wifi routers that you can use. Which one you choose makes almost no difference whatsoever. Just like with Apple, you can go out and buy a random wifi router, take it out of the box and plug it in, and it will just work. Stop freaking out.

Love, the non-Apple universe.

douche 17 hours ago 0 replies      
Interesting to see all the love for AirPorts here. In my experience, they've been pretty garbage, and it was a great relief when we finally ditched them and bought some TP-Link equipment instead, and now we have far less trouble with the office wifi.
mozumder 17 hours ago 1 reply      
I'm surprised Apple could never figure out a way to make a profitable wi-fi router. Every household needs one, so what prevented them from differentiating their products from generic routers? It really could have been a hugely profitable product.

I would have gone with a server approach to a wi-fi router, one that does everything macOS Server does - email, VPN, web, etc.

True Link (YC S13) is launching an investment division medium.com
34 points by howsilly  11 hours ago   6 comments top 4
n00b101 9 hours ago 1 reply      
The glide path shown here [1] seems to advise being ~90% in cash at age 61 and then ramping up risk until you're ~90% in equities by age 86. What's the logic behind that?

[1] https://cdn-images-1.medium.com/max/600/1*TnBmuHLdCi1VGwBMS7...
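For contrast, a conventional target-date glide path runs the other way: heavy in equities when young, de-risking toward cash and bonds with age. A minimal linear sketch of that conventional shape (ages and allocations here are illustrative defaults I picked, not True Link's actual model):

```python
def equity_fraction(age, start_age=40, end_age=80,
                    start_equity=0.90, end_equity=0.20):
    """Fraction of the portfolio held in equities at a given age.

    Linearly de-risks from start_equity at start_age down to
    end_equity at end_age, clamped flat outside that range.
    """
    if age <= start_age:
        return start_equity
    if age >= end_age:
        return end_equity
    t = (age - start_age) / (end_age - start_age)  # progress along the path
    return start_equity + t * (end_equity - start_equity)
```

The linked chart appears to run this curve roughly in reverse, which is exactly what makes it surprising.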

ikeboy 4 hours ago 0 replies      
I've long been interested in an investment company that had a variable cut directly based on the customer's preferences. Ultimately you'd want the manager's incentives from the fees to be exactly aligned with the customer's needs, which means that different fee structures are needed for different kinds of customers.

Basically the hedge fund fee structure extended. So someone who wanted more risk would give a higher cut of fees for higher gains and a lower cut of fees for lower gains, and vice versa. The average expected value to the manager should be a constant percentage of assets, but the distribution changes for each customer.

One cool thing that naturally falls out of this idea is negative fees: if someone is risk averse enough, then the incentives require the manager to lose money if the customer loses, which causes the manager to be risk averse as well for those funds.

(I have more detailed thoughts on this that this margin is too small to contain; feel free to email me for some disorganized elaboration on the above.)
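One way to sketch the idea (hypothetical numbers and function names, not any actual product): make the fee a flat base rate plus a performance term scaled by a per-client `risk_share`, so the expected fee stays a constant fraction of assets while its distribution shifts with the client's risk preference:

```python
def manager_fee(assets, realized_return, expected_return=0.05,
                base_rate=0.01, risk_share=0.2):
    """Manager's fee for one period under the variable-cut scheme.

    base_rate:  flat fraction of assets; if returns are symmetric
                around expected_return, the average fee is exactly
                assets * base_rate regardless of risk_share.
    risk_share: per-client coefficient on performance; higher means
                a more hedge-fund-like payoff, and a large enough
                value makes the fee negative in bad periods (the
                manager pays), which is the insurance-like case.
    """
    performance = realized_return - expected_return
    return assets * (base_rate + risk_share * performance)
```

Since the performance term averages to zero, the manager's expected cut is pinned at `base_rate` while the client's preference picks how much the fee swings with results.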

qwrusz 6 hours ago 0 replies      
There's a big challenge for RIAs trying to do right by their clients and offer appropriate glide path investment portfolios:

The products and potential returns that honest, legit RIAs discuss with potential clients will always be unappealing compared to what competitor, dishonest RIAs (who are willing to exaggerate) will be offering.

One lesson from the election: it's hard to convince people of a reality they don't want to hear, and to warn them that others promising wildly optimistic scenarios are not being totally honest with them. Potential investors want to believe exaggerated talk of huge returns from dishonest RIAs, and honest RIAs lose clients because of this.

tptacek 9 hours ago 0 replies      
How is this better than a Vanguard Target Retirement fund?
       cached 22 November 2016 08:02:02 GMT