Hacker News with inline top comments, 23 Aug 2017, Best
Android Oreo android.com
746 points by axg  1 day ago   605 comments top 5
dcomp 1 day ago 7 replies      
The most interesting part is the way they are planning on tackling fragmentation in O onwards with Project Treble [0]

If your device ships with O, it should be running an immutable, semantically versioned HAL. In essence, you should be able to flash AOSP onto every new device, no matter what the vendor does.

Edit: I can see it now: in the technical specs of each device you will see a list of HAL versions. The newer your HAL, the longer you can expect support from AOSP, if not from your vendor.

[0] http://androidbackstage.blogspot.co.uk/2017/08/episode-75-pr...

klondike_ 1 day ago 9 replies      
Project Treble is the most important thing in this release

>The biggest change to the foundations of Android to date: a modular architecture that makes it easier and faster for hardware makers to deliver Android updates.

With any luck, this will end the huge security/update problem Android has. Right now an update is dependent on the chip manufacturer's drivers, then the OEM adding them to the ROM with their custom "improvements", and finally the carrier pushing the update to devices. Right now it just takes one break in the link and a device goes without updates, which is a security disaster. If Google can push updates from the Play Store (presumably the end goal of Treble), none of this will be a problem.

rdsubhas 1 day ago 8 replies      
Not saying that everything else is bad, but one thing that strikes me is how much they have run out of interesting things, now that they have to resort to fillers [1] like:


Support for tooltips (small popup windows with descriptive text) for views and menu items.

Normally, this would be relegated to a git changelog in the support library. But this is on the global marketing landing page.

I like to imagine a fictional internal mail thread going like this:

> Folks! please, give us something, anything, to put on the landing page!

> Someone replies duh, maybe tooltips

> What's a tooltip?

> uhh, small popup windows with descriptive text

> What's a popup window?

> uhh...

> Nevermind, it's on!

Obligatory /s, and yeah, it's Google, but seriously, I can't imagine any other way this specific copy, which tries to explain what a "tooltip" is by using the words "popup window", "view" and "menu item", could have come about.

This could be a good sign though, of the maturity of the platform (and harder to feel left out if you didn't upgrade).

1: https://www.android.com/versions/oreo-8-0/

dcow 1 day ago 6 replies      
Am I the only one who's really disappointed by the platform's shift in its stance on background execution? I was originally drawn to Android because it wasn't iOS. I wanted to develop on a platform where I could run a service in the background if the user wanted that. Apps that were bad stewards of battery life and phone resources were supposed to be scrutinized by users and removed if they were too poor. You can be a good steward; it's just harder, especially when your monolithic app is an uninformed port of some legacy iOS code.

By issuing a hard restriction on background usage, Google has brilliantly improved battery life for the masses while condoning the same lazy architectural patterns of the past, locked people into Firebase Cloud Messaging (a Google service that is not part of AOSP), and potentially stunted Android adoption in domains outside of mobile. It's the turning of an era for Android, and my interests have moved elsewhere (from an app-platform perspective; embedded Android is still viable since everything you ship runs as a system app with no restrictions).

amrrs 1 day ago 7 replies      
Has Google ever released a report on how long it takes an average flagship device to get the latest Android version? Even if you've just paid $$$$ for a Samsung Galaxy S8, you're not going to get Android O tomorrow morning. With iOS, that's exactly what you get. That makes a huge difference in a world where software updates play a bigger role in performance and functionality than hardware updates (read: image processing vs. a 13MP-to-16MP camera bump). Google hasn't been successful at that.
Let Consumers Sue Companies nytimes.com
521 points by jseliger  11 hours ago   248 comments top 26
flexie 11 hours ago 3 replies      
In the EU you cannot bind consumers by such arbitration clauses: http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A3...

Consumers can usually sue corporations at a court in their own jurisdiction. Many European countries also allow class action law suits. Yet, we have few law suits against corporations. There are other reasons for this:

- consumers are not awarded punitive damages,

- court fees are higher (usually a percentage of what you ask for),

- if the consumers lose they pay not only their own lawyer, but (to an extent decided by the court) also the lawyer representing the corporation,

- many European countries have consumer "watchdogs" / ombudsmen, i.e. public entities that have the authority to start cases against corporations,

- many European countries have a variety of consumer complaint boards that handle small claims efficiently and at low cost.

Few who know consumer matters in both the US and the EU would trade the European system for the American.

rayiner 11 hours ago 4 replies      
Class actions can be effective where the class members are relatively large, sophisticated entities. E.g. the data-breach class action brought by banks that is mentioned in the article. But in the consumer-protection space, we should consider alternatives. Where the class members are individual consumers, litigation ends up being lawyer-driven. Cases settle for pennies on the dollar of potential damages, and end up serving neither to compensate consumers nor really to deter illegal conduct.

Notably, in the EU, the tendency is to have more "ask for permission before doing something" regulation, and less "ask for forgiveness after doing something wrong" litigation. E.g. unlike in the U.S., there are laws setting forth detailed safety requirements for consumer products, and agencies responsible for enforcing those requirements. I suspect that approach yields the desired level of product safety at lower cost than the American approach. Similar approaches could, of course, be applied to consumer financial products.

hedora 11 hours ago 4 replies      
I never understood why binding arbitration was legal for non-negotiated contracts.

Also, by reading this you agree all disputes between us will go through an arbitration firm of my choosing.

wimgz 11 hours ago 6 replies      
> only a lunatic or a fanatic sues for $30.

A bit off topic here but this is IMHO a great challenge for AI: making a lawyer affordable for the masses when they are bullied by banks, airlines, etc.

If it costs you $5, why not sue for $30?

mxfh 11 hours ago 2 replies      
Just look at how the EU consumer protection directives are working over here. You're simply not allowed to waive your guaranteed rights as a customer in some sort of EULA or TOS. And if you are forced to, the whole contract is void in its entirety and you're free to walk away from it.
s73ver_ 9 hours ago 0 replies      
There is no valid reason why a company should require someone to sign away their rights, and it absolutely should not be allowed. Otherwise individuals might as well not have those rights at all.
clavalle 10 hours ago 1 reply      
Capture of the machinery of the justice system by the wealthy is one of the most impactful and persistent market distortions in human history.

If people cannot bring the power of government to enforce appropriate costs against players with more market power, then government of the People, by the People, and for the People has failed.

coolaliasbro 10 hours ago 0 replies      
"First, opponents claim that plaintiffs are better served by acting individually than by joining a group lawsuit."

Then why should it matter if plaintiffs want to act collectively (i.e., put themselves at a disadvantage, according to the quote above)--wouldn't that benefit the opponents?

leoharsha2 11 hours ago 1 reply      
I sued a company where I worked. They were not paying my final dues after I left. It took me a year and lots of visits to court just to get them. Finally, after a year of all this wasted time, they came to me and asked for a settlement, to which I agreed, and I got part of my money.

In India, even if you know you will win the case, it is not worth it. It will cost you your peace of mind. I'd highly recommend suing companies in other countries, but in India you should think twice before suing anyone.

rrggrr 10 hours ago 1 reply      
FACT: products liability law reduced accidental deaths in the workplace. (source: https://www.cdc.gov/mmwr/preview/mmwrhtml/mm4822a1.htm)

There is no reason similar laws wouldn't work for consumer privacy. Tort law needs to be applied aggressively to data privacy.

jseliger 11 hours ago 0 replies      
This is a great idea. Contracts of adhesion that dominate our lives should not automatically and totally be stacked in favor of companies.
Pica_soO 3 hours ago 1 reply      
I wish there was a way to preemptively sue an industry before it even undertakes an operation. Basically, a group of people bets, via lawsuits, on the damages an industrial operation will do to society, forcing anyone undertaking such an operation (cutting down rainforest, for example) to build up a huge deposit for settlements and legal fees to cover the threat. As state actors have proven lousy wardens of these goods of society, maybe the interest of private stakeholders could put such a bounty on the head of damaging activities that they cease, or are replaced with less "dangerous" endeavors that were previously not viable in a market economy that rewards short-term damages distributed across everyone.
redm 11 hours ago 4 replies      
I don't disagree with the sentiment of the article given the examples provided, i.e. Wells Fargo. That said, given the climate for frivolous lawsuits brought by "shakedown" attorneys, it opens the flood gates for something far worse.

Maybe a better compromise is to allow for binding arbitration UNLESS the company is found guilty of fraud or other illegal activity, such as Wells Fargo.

Alternatively, perhaps tort reform to prevent frivolous lawsuits would remove the need for arbitration.

ct520 9 hours ago 0 replies      
There's this one cool site to query outcomes of arbitration. No bueno.


DannyBee 11 hours ago 1 reply      
I'd love to understand what the end hope is. Very few class action lawsuits have resulted in any sort of permanent change. Also note the whole reason for class action lawsuits was efficient justice, and class action lawsuits are a fairly recent creation, so that's not entirely surprising. However, the lawsuits that tend to change things tend to be "government vs." suits.

Maybe it gives consumers a good feeling to be able to sue everyone, but is it actually helping anything?

Even in the past, people were not able to sue the telecoms or banks into having good customer service, or into not doing illegal things. Rarely, if ever, have they recouped the profits these companies made doing whatever. Instead, all the companies just treat it as a "cost of business". I'm not sure it's really been a vehicle for effective change anymore.

Certainly arbitration won't be either, but maybe groups of super annoyed people will have better luck forcing the government into action than people placated by class actions, where the government can wash its hands and say "well, they already took care of it!"

socrates1998 10 hours ago 0 replies      
I agree: these arbitration clauses are just another way corporations get away with screwing over consumers.
Animats 8 hours ago 1 reply      
> In 2010, the Consumer Financial Protection Bureau, which I direct, was authorized to study mandatory arbitration and write rules consistent with the study. After five years of work...

Talk about a schedule overrun. That project should have been finished in 2011.

richardknop 11 hours ago 1 reply      
This isn't already possible? It's not right that companies can hack the law like this. Consumers should be able to sue.
Khol 10 hours ago 0 replies      
The oddest takeaway from this for me is that these arbitration clauses are banned for contracts for members of the military.
exabrial 9 hours ago 0 replies      
Call me cynical, but it seems most class action lawsuits end up paying $100m to a lawyer group and $5 to each individual :/
tareqak 2 hours ago 0 replies      
A slightly related issue that I find somewhat unsettling is how easily big companies being sued by state or federal governments are able to settle out of court.

From my understanding, the argument for allowing settlements is to help one or both parties somehow save face, time, or money. However, what everyone (including people who are not affiliated with either party) loses out on is the possibility to establish a precedent. Some of the cases that come to mind are the HSBC case where some people who were following the court proceedings started to sense and publicly state the real possibility of actual criminal wrongdoing (https://www.wsj.com/articles/hsbc-agrees-470m-settlement-ove...). Compare that to a guilty plea by BNP Paribas here (https://www.theguardian.com/business/2014/jun/30/bnp-paribas... and https://www.justice.gov/opa/pr/bnp-paribas-sentenced-conspir...).

In comparison, see what happens to an individual (https://www.cnbc.com/2014/03/19/soc-gen-rogue-trader-kerviel...), and how people try to avoid it (https://www.bloomberg.com/news/articles/2017-06-09/ex-socgen...).

TL;DR: Big companies get to settle all too easily in cases brought by government attorneys, without admitting guilt or wrongdoing, in contrast to individuals (even wealthy ones), who often become the targets of what effectively turns into a (sometimes faux) moral crusade. Companies end up:

1. Doing something that breaks the law, but in a sufficiently obfuscated manner.

2. Getting caught by government, usually via the observation and study of keen citizens and/or outright whistleblowers.

3. Involving their legal team to both prepare for fighting the case and give PR guidance and statements. Marketing and salespeople downplay the concerns from both consumers and customers.

4. When the case becomes sufficiently uncomfortable, settling it with government for some fraction of the total estimated damages without admitting wrongdoing.

5. Writing off the settlement amount in the most tax-friendly manner possible via in-house or third-party accountants.

6. Getting praised internally and externally for being scrappy and disrupting the establishment/government/noun-used-as-a-pejorative (yes, you too, dear HN contributor/lurker).

7. Rinse and repeat 1-6.

I think a few of the things that Uber and AirBnB have had to settle essentially map to the steps I outlined above.

Note:

a. The sources I cite above might not be the best ones to support my claim, but the general issue and the unwillingness to resolve it properly remain.

b. I realize that step 6 is mean/snarky/typecasting. My aim was to remind the reader that audible/silent praise/rebuke do have power, and that doing nothing does register as a signal in some situations.

amelius 11 hours ago 1 reply      
I was under the impression that class-action lawsuits were already a possibility for consumers.
necessity 10 hours ago 1 reply      
Don't do business with companies that have such contracts? If there is no choice in a given sector then the issue is a monopoly not the contracts.
samstave 9 hours ago 0 replies      
Probably not the perfect topic for the following question, but close enough for me to ask;

What is the deal with class action lawsuits WRT the monies lawyers get vs. what the plaintiffs get?

I was a victim of fraudulent banking practices in 2008/2009, which resulted in the illegal foreclosure of my home in San Jose. I "won" my class action lawsuit and was "awarded" $1,100 for my victory over having a $489,000 house stolen from me by the bank. (Never missed a payment, never late, had a credit score of 780; this experience ruined me.)

So I "won" that legal battle - but was unable to have my credit score fixed through the win...

So the question is: class-action lawsuit victories look to me to be a complete sham, so why would we value them as anything other than a "fuck you for not having enough money" enterprise - and where do the lawyers get off on their "right" to profit off such actions at the expense of others?

pfarnsworth 11 hours ago 7 replies      
Letting consumers continue to sue is not the answer. It only incentivizes more litigiousness, which is one of the things ruining American society.

What we need instead is for the government to pursue these crimes on our behalf. The fines should be draconian, with no option to dilute them the way the SEC does. And the fines should go towards some fund that specializes in charities instead of going to government coffers, so that we don't incentivize behavior like civil forfeiture.

Why PS4 downloads are so slow snellman.net
706 points by kryptiskt  3 days ago   191 comments top 23
ploxiln 3 days ago 4 replies      
Reminds me of how Windows Vista's "Multimedia Class Scheduler Service" would put a low cap on network throughput if any sound was playing:


Mark Russinovich justified it by explaining that the network interrupt routine was just too expensive to be able to guarantee no glitches in media playback, so it was limited to 10 packets per millisecond when any media was playing:


but obviously this is a pretty crappy one-size-fits-all prioritization scheme for something marketed as a most-sophisticated best-ever OS at the time:


Many people had perfectly consistent mp3 playback while copying files over the network 10 times as fast in other OSes (including Win XP!).

Often a company will have a "sophisticated best-ever algorithm" and then put in a hacky, lazy workaround for some problem, and of course not tell anyone about it. Sometimes the simpler, less sophisticated solution just works better in practice.
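The 10-packets-per-millisecond figure above makes the throttle easy to sanity-check. A rough back-of-envelope, assuming standard ~1500-byte Ethernet frames (the packet-rate number is the only figure taken from Russinovich's explanation; the rest is an estimate):

```python
# Back-of-envelope for Vista's MMCSS network throttle, assuming
# ~1500-byte Ethernet frames. Only the 10 packets/ms figure comes
# from Russinovich's explanation; everything else is an estimate.

PACKETS_PER_MS = 10
FRAME_BYTES = 1500          # typical Ethernet MTU

bytes_per_sec = PACKETS_PER_MS * 1000 * FRAME_BYTES
mbits_per_sec = bytes_per_sec * 8 / 1e6

print(f"cap = {mbits_per_sec:.0f} Mbit/s")
```

At roughly 120 Mbit/s the cap is invisible on 100 Mbit Ethernet but limits a gigabit link to about 12% of its capacity, which is consistent with users seeing much faster copies on other OSes over fast links.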

andrewstuart 3 days ago 4 replies      
It's bizarre, because I bought something from the PlayStation store on my PS4 and it took DAYS to download.

The strange part of the story is that it took so long to download that the next day I went and bought the game (Battlefield 4) from the shop and brought it back home and installed it and started playing it, all whilst the original purchase from the PlayStation store was still downloading.

I asked Sony if they would refund the copy I bought from the PlayStation store, given that I had gone and bought it from a physical store while it was still downloading, and they said "no".

So I never want to buy from the PlayStation store again.

Why would Sony not care about this above just about everything else?

erikrothoff 3 days ago 2 replies      
Totally unrelated, but: dang, it must be awesome to have a service that people dissect at this level. This analysis is more in-depth and knowledgeable than anything I've ever seen while employed at large companies, where people are literally paid to spend time on the product.
g09980 3 days ago 5 replies      
I want to see something like this for (Apple's) App Store. Downloads are fast, but the App Store experience itself is so, so slow. It takes maybe five seconds to load search results or reviews, even on a Wi-Fi connection.
cdevs 2 days ago 1 reply      
As a developer, people seem surprised that I don't have some massive gaming rig at home, but there's something about it that feels like work. I don't want to sit up and be fully alert; I did that all day at work. I want 30 minutes to veg out on a console, jumping between Netflix and some quick multiplayer game with fewer hackers glitching out on the game. It's impressive what the PS4 attempts to accomplish while you're playing a game: download a 40 GB game and somehow tiptoe in the background without screwing up the gaming experience. I can't imagine trying to crank the speed up and down while keeping an online game playable. Chrome is slow? Close your 50 tabs. Want faster PS4 downloads? Close your games/apps. Got it.
ckorhonen 3 days ago 3 replies      
Interesting - definitely a problem I've encountered, though I had assumed the issues fell more on the CDN side of things.

Anecdotally, when I switched DNS servers from my ISP's to Google's, PS4 download speeds improved significantly (20 minutes vs. 20 hours to download a typical game).

mbrd 3 days ago 0 replies      
This Reddit thread also has an interesting analysis of slow PS4 downloads: https://www.reddit.com/r/PS4/comments/522ttn/ps4_downloads_a...
lossolo 3 days ago 2 replies      
DNS-based geo load balancing for CDNs is the wrong idea today. For example, if you use a DNS resolver with a bad configuration, or one not supplied by your ISP, you can be routed to servers thousands of km/miles from your location. Last time I checked, Akamai used that flawed DNS-based system. What you want now is what Cloudflare, for example, uses: anycast IP. You announce the same IP prefix from multiple routers/locations, and all traffic is routed to the nearest location thanks to how BGP routing works.
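The anycast idea described here can be illustrated with a toy model: several points of presence announce the same prefix, and plain shortest-path routing, not DNS, decides which one a client reaches. The POP names and hop counts below are entirely made up:

```python
# Toy illustration of anycast routing: several POPs "announce" the
# same IP prefix, and each client's traffic follows the shortest
# network path to one of them. No DNS lookup is involved in the
# choice. POP names and hop counts are hypothetical.

POPS = {
    "sin": 3,   # hop count from this client to the Singapore POP
    "fra": 9,   # ... to Frankfurt
    "sfo": 12,  # ... to San Francisco
}

def anycast_pick(hops_by_pop):
    # BGP-style: traffic to the shared prefix takes the shortest path.
    return min(hops_by_pop, key=hops_by_pop.get)

print(anycast_pick(POPS))  # "sin" — the nearest POP wins
```

The contrast with DNS-based geo balancing is that nothing here depends on which resolver the client uses; the network itself picks the nearest location.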
Reedx 3 days ago 3 replies      
PS3 was even worse in my experience - PS4 was a big improvement, although still a lot slower than Xbox.

However, with both the PS4 and the Xbox One it's amazingly slow to browse the stores and much of the dashboard. Anyone else experience that? It's so bad I feel like it must just be me... I avoid it as much as possible, and it definitely decreases the number of games I buy.

jcastro 3 days ago 0 replies      
Lancache says it caches PS4 and XBox, anyone using this? https://github.com/multiplay/lancache

(I use steamcache/generic myself, but should probably move to caching my 2 consoles as well).

foobarbazetc 3 days ago 2 replies      
The CDN thing is an issue too.

Using a local DNS resolver instead of Google DNS helped my PS4 speeds.

The other "trick", if a download is getting slow, is to run the built-in "network test". This seems to reset all the windows even if other things are running.

Companion 1 day ago 0 replies      
I actually dread downloading patches and whatnot from PSN for this reason. I have a 500 Mbit connection that works perfectly well on all my other devices, but my PS4 Pro is incredibly fickle. There'll be days where download speeds are good, and then there'll be days where even downloading 200 MB is a challenge. It's all wired, so it's not a Wi-Fi-related problem. I went through different routers and even changed ISPs once, and the problem still persisted, so I think I've ruled out my end. It seems to be some weird QoS feature of the PS4, or possibly PSN not being up to scratch; I don't know. Stuff like closing all background apps or changing DNS doesn't really seem to do anything for me. Sometimes pausing/unpausing does help, though.
Tloewald 3 days ago 0 replies      
It's not just the PS4, four years into its life: the PS3 was at least as bad.
tgb 3 days ago 6 replies      
Sorry for the newbie question, but can someone explain why the round-trip time is so important for transfer speeds? From the formula I'm guessing something like this happens: server sends DATA to client, client receives DATA then sends ACK to server, server receives ACK and then finally goes ahead and sends DATA2 to the client. But TCP numbers its packets, so I would expect it to continue sending new packets while waiting for ACKs of old ones, and my reading of Wikipedia agrees. So what causes the RTT dependence in the transfer rate?
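The short answer to this question: TCP does keep sending while ACKs are outstanding, but only up to one receive-window's worth of unacknowledged data; once that much is in flight, the sender must stall until ACKs arrive. Steady-state throughput is therefore capped at roughly window / RTT, regardless of link bandwidth. A quick illustration (the window sizes and RTT below are illustrative, not measurements from the article):

```python
# TCP keeps at most one receive-window's worth of unacknowledged
# data in flight, so steady-state throughput is capped at
# window / RTT no matter how fast the underlying link is.
# Window sizes and RTT here are illustrative examples.

def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on TCP throughput for a given window and RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

RTT_MS = 100  # e.g. a console talking to a far-away CDN node

for window in (16 * 1024, 256 * 1024, 4 * 1024 * 1024):
    print(f"{window // 1024:5d} KiB window -> "
          f"{max_throughput_mbps(window, RTT_MS):8.2f} Mbit/s")
```

This is why a small advertised receive window cripples downloads from distant servers: at 100 ms RTT, a 16 KiB window caps the connection at about 1.3 Mbit/s even on a gigabit link.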
sydney6 3 days ago 0 replies      
Is it possible that missing TCP timestamps in the traffic from the CDN are causing the TCP window-size auto-scaling mechanism to fail?

See this commit:


lokedhs 2 days ago 1 reply      
As one piece of information I offer my own experience with PSN downloads on the PS4.

I'm in Singapore and my normal download speed is around 250 Mb/s, sometimes getting closer to 300.

However, I sometimes download from the Swedish store as well, and those download speeds are always very slow. I don't think I've ever gone above one tenth of what I get with local downloads.

That said, bandwidth between Europe and Singapore is naturally more unpredictable, so I don't know if I can blame Sony here. My point is that PS4 downloads can be very fast, and the Singapore example is evidence of this fact.

tenryuu 3 days ago 1 reply      
I remember someone hacking at this issue a while ago. They blocked Sony Japan's server, which the download was coming from. The PlayStation then fetched the file from a more local server, and the speed was considerably faster.

Really strange

jumpkickhit 3 days ago 0 replies      
I normally warm-boot mine; I saw the speed increase with nothing running before, so I guess I was on the right track.

I hope this is addressed by Sony in the future, or at least let us select if a download is a high priority or not.

deafcalculus 2 days ago 0 replies      
Why doesn't PS4 use LEDBAT for background downloads? Wouldn't this address the latency problem without sacrificing download speeds? AFAIK, Macs do this at least for OS updates.
hgdsraj 3 days ago 1 reply      
What download speeds do you get? I usually average 8-10 MB/s
galonk 3 days ago 0 replies      
I always assumed the answer was "because Sony is a hardware company that has never understood the first thing about software."

Turns out I was right.

bitwize 3 days ago 1 reply      
This is so that there's plenty of bandwidth available for networked play.

The Switch firmware even states that it will halt downloads if a game attempts to connect to the network.

frik 3 days ago 3 replies      
At least the PS4 and Switch have no peer-to-peer downloads.

Win10 and Xbox One have peer-to-peer downloads. Who would want that? It's bad for users, wastes upload bandwidth, and counts against your monthly internet cap. https://www.reddit.com/r/xboxone/comments/3rhs4s/xbox_update...

Ideal OS: Rebooting the Desktop Operating System joshondesign.com
641 points by daureg  2 days ago   324 comments top 50
joshmarinacci 2 days ago 11 replies      
I'm the original author. I hadn't planned to publicize this yet. There are still some incomplete parts, broken links, and missing screenshots. But the Internet wants what it wants.

Just to clarify a few things.

I just joined Mozilla Devrel. None of this article has anything to do with Mozilla.

I know that none of the ideas in this article are new. I am a UX expert and have 25 years experience writing professional software. I personally used BeOS, Oberon, Plan 9, Amiga, and many others. I read research papers for fun. My whole point is that all of this has been done before, but not integrated into a nice coherent whole.

I know that a modern Linux can do most of these things with Wayland, custom window managers, DBus, search indexes, hard links, etc. My point is that the technology isn't that hard. What we need is to put all of these things into a nice coherent whole.

I know that creating a new mainstream desktop operating system is hopeless. I don't seriously propose doing this. However, I do think creating a working prototype on a single set of hardware (RPi3?) would be very useful. It would give us a fertile playground to experiment with ideas that could be ported to mainstream OSes.

And thank you to the nearly 50 people who have signed up to the discussion list. What I most wanted out of this article was to find like minded people to discuss ideas with.

Thanks, Josh

Damogran6 2 days ago 4 replies      
So what he's saying is: remove all these layers because they're bad, but add these OTHER layers because they're good.

That's how you make another AmigaOS, or Be. I'm sure Atari still has a group of a dozen folks playing with it, too.

The OSes of the past 20 years haven't shown much advancement because the advancement is happening higher up the stack. You CAN'T throw out the OS and still have ARKit. A big, bloated, mature, Moore's-Law-needing OS is also stable, has hooks out the wazoo, AND A POPULATION USING IT.

Four guys coding in the dark on the bare metal just can't build an OS anymore: it won't have GPU access, it won't have a solid TCP/IP stack, it won't have good USB support, or caching, or a dependable file system.

All of these things take a ton of time, and people, and money, and support (if you don't have money, you need the volunteers)

Go build the next modern OS, I'll see you in a couple of years.

I don't WANT this to sound harsh; I'm just bitter that I saw a TON of awesome, fledgling, fresh operating systems fall by the wayside... I used BeOS, I WANTED to use BeOS, I'da LOVED it if they'd won out over NeXT (another awesome operating system... at least that one survived).

At a certain level, perhaps what he wants is to leverage ChromeOS...it's 'lightweight'...but by the time it has all the tchotchkes, it'll be fat and bloated, too.

cs702 2 days ago 8 replies      
Yes, existing desktop applications and operating systems are hairballs with software layers built atop older software layers built atop even older software layers.

Yes, if you run the popular editor Atom on Linux, you're running an application built atop Electron, which incorporates an entire web browser with a Javascript runtime, so the application is using browser drawing APIs, which in turn delegate drawing to lower-level APIs, which interact with a window manager that in turn relies on X...

Yes, it's complexity atop complexity atop complexity all the way down.

But the solution is NOT to throw out a bunch of those old layers and replace them with new layers!!!

Quoting Joel Spolsky[1]:

"There's a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming: It's harder to read code than to write it. ... The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they've been fixed. ... When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work."

[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...

jcelerier 2 days ago 2 replies      
> Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible.

that's absolutely possible on linux with i3wm for instance

> I'd like to pipe my Skype call to a video analysis service while I'm chatting, but I can't really run a video stream through awk or sed.

awk and sed, no, but there are many CLI tools that accept video streams through pipe. e.g. FFMPEG. You wouldn't open your video through a GUI text editor, so why would you through CLI text editors ?

> Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.

Sure they are, on linux: https://linux.die.net/man/1/wmctrl

Fifteen years ago people were already controlling their WM through dbus: http://wiki.compiz.org/Plugins/Dbus#Combined_with_xdotool

The thing is, no one really cares about this in practice.

spankalee 2 days ago 3 replies      
This sounds a lot like Fuchsia, which is all IPC-based, has a syncable object-store[1], a physically-based renderer[2], and the UI is organized into cards and stories[3] where a story is "a set of apps and/or modules that work together for the user to achieve a goal.", and can be clustered[4] and arranged in different ways[4].

[1]: https://fuchsia.googlesource.com/ledger/

[2]: https://fuchsia.googlesource.com/escher/

[3]: https://arstechnica.com/gadgets/2017/05/googles-fuchsia-smar...

[4]: https://fuchsia.googlesource.com/sysui/#important-armadillo-...

[5]: https://fuchsia.googlesource.com/mondrian/

alexandercrohde 2 days ago 6 replies      
I really don't understand the negativity here. I sense a very dismissive tone, but most of the complaints are implementation details, or that this has been tried before (so what?).

I think anybody who really thinks about it would have to agree modern OSes are a disgusting mess.

-- Why does an 8-core Mac have moments when it is so busy I can't even click anything and only see a pinwheel? It's not the hardware. No app should have the capability, even if it tried, to slow down the OS/UI (without root access).

-- Yes, it should be a database design, with permissions.

-- Yes, by making it a database design, all applications get the ability to share their content (i.e. make files) in a performant, searchable way.

-- Yes, permissions are a huge issue. If every app were confined to a single directory (Docker-like) then backing up an app, deleting an app, or terminating an app would be a million times easier. Our OSes will never be secure until they're rebuilt from the ground up. (Right now Windows lets apps store garbage in the 'registry', and Linux stores your apps' data strewn throughout /var/etc, /var/log, /app/init, .... These should all be materialized views, i.e. symlinks.)

-- Mac Finder is cancer. If the OS were modularizable, it'd be trivial for me, a software engineer, to drop in a replacement (like you can with car parts).

-- By having an event-driven architecture, this gives me exact tracking on when events happened. I'd like a full record of every time a certain file changes, if file changes can't happen without an event, and all events are indexed in the DB, then I have perfect auditability.

-- I could also assign permission events (throttle browser CPU to 20% max, pipe all audio from spotify to removeAds.exe, pipe all UI notifications from javaUpdater to /dev/null)
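
The database-backed bullets above can be sketched in a few lines. Here is a minimal, purely illustrative take on the event-audit idea using SQLite; the schema, paths, and timestamps are invented for the example, not a real OS API:

```python
import sqlite3

# Illustrative sketch: if every file change were an event recorded in a
# system-wide database, auditing becomes a simple query.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE events (
        id        INTEGER PRIMARY KEY,
        path      TEXT NOT NULL,
        kind      TEXT NOT NULL,          -- 'create', 'modify', 'delete'
        timestamp REAL NOT NULL
    )
""")

# Record a few file-change events (made-up data).
changes = [
    ("/home/alice/notes.txt", "create", 1000.0),
    ("/home/alice/notes.txt", "modify", 1005.0),
    ("/home/alice/todo.md",   "create", 1010.0),
    ("/home/alice/notes.txt", "modify", 1020.0),
]
db.executemany(
    "INSERT INTO events (path, kind, timestamp) VALUES (?, ?, ?)", changes
)

# "Perfect auditability": the full change history of one file is one query.
history = db.execute(
    "SELECT kind, timestamp FROM events WHERE path = ? ORDER BY timestamp",
    ("/home/alice/notes.txt",),
).fetchall()
print(history)  # [('create', 1000.0), ('modify', 1005.0), ('modify', 1020.0)]
```

Of course, a real OS would have to enforce that changes can only happen through the event path, which is the hard part.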

I understand the "Well, who's gonna use it?" question, but it's circular reasoning. "Let's not get excited about this, because nobody will use it, because it won't catch on, because nobody got excited about it." If you get an industry giant behind it (Linus, Google, Carmack) you can absolutely reinvent a better wheel (e.g. Git, Chrome) and displace a huge market share in months.

noen 2 days ago 7 replies      
As a current developer, former 10-year UX designer, and developer before that, this kind of article irks me to no end.

He contradicts his core assertion (OS models are too complex and layered) with his first "new" feature.

Nearly everything in this manifesto has been done before, and done well, and many of the things he gripes about are already possible in most modern OSes. The article just ignores all of the corner cases, conflicts, and trade-offs.

Truly understanding the technology is required to develop useful and usable interfaces.

I've witnessed hundreds of times as designers hand off beautiful patterns and workflows that can't ever be implemented as designed. The devil is in the details.

One of the reasons Windows succeeded for so long is that it enabled people to do a common set of activities with minimal training, by maximizing reuse of as few common patterns as possible.

Having worked in and on Visual Studio, it's a great example of what happens when you build an interface that allows the user to do anything, and the developer to add anything. Immensely powerful, but 95% of the functionality is rarely if ever used, training is difficult because of the breadth and pace of change, and discovery is impossible.

avaer 2 days ago 4 replies      
> I suspect the root cause is simply that building a successful operating system is hard.

It's hard but not that hard; tons of experimental OS-like objects have been made that meet these goals. Nobody uses them.

What's hard is getting everyone on board enough for critical inertia to drive the project. Otherwise it succumbs to the chicken-and-egg problem, and we continue to use what we have because it's "good enough" for what we're trying to do right now.

I suspect the next better OS will come out of some big company that has the clout and marketing to encourage adoption.

dcow 2 days ago 0 replies      
Android already tried things like a universal message bus and a module-based architecture, and while nice, it doesn't quite live up to the promise, for two reasons:

1. Application devs aren't trained to architect new software. They will port old shitty software patterns from familiar systems because there's no time to sit down and rewrite photoshop for Android. It's sad but true.

2. People abuse the hell out of it. Give someone a nice thing and someone else will ruin it, whether they're trying to or not. A universal message bus has security and performance implications. Maybe if Android were a desktop OS not bound by limited resources it wouldn't have pulled out all the useful intents and neutered services, but then again the author's point is that we should remove these complex layers, and clearly having them was too complex/powerful/hungry for Android.

I do think there's a point to be made that we're very mouse-and-keyboard centric at the primitive IO level and in UI design. I always wondered what the "command line" would look like if it were more complex than 128 ASCII characters in a one-dimensional array. But it probably wouldn't be as intuitive for humans to interface with unless you could speak and gesture to it, as the author suggests.

nwah1 2 days ago 2 replies      
I agree with a lot of the critics in the comments, but I will say that the author has brought to my attention a number of features that I'm now kind of upset that I don't have.

I always thought LED keyboards were stupid because they are useless, but if they could map to hotkeys in video players and such, that could be very useful, assuming you can turn off the LEDs.

His idea for centralized application configs and keybindings isn't bad if we could standardize on something like TOML. The Options Framework for WordPress plugins is an example of this kind of thing, and it does help. It won't be possible to get all the semantics agreed upon, of course, but maybe 80% is enough.

Resurrecting WinFS isn't so important, and I feel like there'd be no way to get everyone to agree on a single database unless every app were developed by one team. I actually prefer heterogeneity in the software ecosystem, to promote competition. We mainly need proper journalling filesystems with all the modern features. I liked the vision of Lennart Poettering in his blog post about stateless systems.

The structured command line linked to a unified message bus, allowing for simple task automation sounds really neat, but has a similar problem as WinFS. But I don't object to either, if you can pull it off.

Having a homogeneous base system with generic apps that all work in this way, with custom apps built by other teams, is probably the compromise solution and the way things have trended anyway. As long as the base system doesn't force the semantics on the developers, it is fine.

diegof79 2 days ago 1 reply      
What the author wants is something like Squeak. The idea behind Smalltalk wasn't to create a programming language but to realize the DynaBook (google for the essay "The Early History of Smalltalk").

While I agree with the author that more innovation is needed on the desktop, I think the essay is very misinformed.

For example, Squeak can be seen as an OS with very few layers: everything is an object, and syscalls are primitives. As a user you can play with all the layers and rearrange the UI as you want.

So why didn't the idea take off? I don't know exactly (but I have my hypotheses). There are many factors to balance, and those many factors are what make design hard.

One of those factors is that people tend to put their priorities for innovation in the wrong place. A good example is what the author lists as priorities for him. None of the items addresses the fundamental problems computer users face today (from my perspective, of course).

antoineMoPa 2 days ago 2 replies      
I appreciate the article for its coverage of many OSes (including BeOS, wow, I should try that). What about package management, though? Package management really defines the way you live under your flavor of Linux, and there is a lot of room for improvement in current package managers (decentralizing them, for example).


> I know I said we would get rid of the commandline before, but I take that back. I really like the commandline as an interface sometimes, it's the pure text nature that bothers me. Instead of chaining CLI apps together with text streams we need something richer [...]

I can't agree with that; it is the plain-text nature of the command line that makes it so useful and simple once you know a basic set of commands (ls, cd, find, sed, grep, plus whatever your specific task needs). Plain text is easy to understand and manipulate to perform whatever task you need to do. The moment you learn to chain commands and save them to a script for future use, the sky is the limit. I do agree with using voice to chain commands, but I would not complain about the plain-text nature or try to bring buttons or other forms of unneeded complexity to the command line.

ghinda 2 days ago 1 reply      
You have most of these, or at least very similar versions, in Plasma/KDE today:

> Document Database

This is what Akonadi was when it came out for 4.x. Nepomuk was the semantic search framework, so you could rate/tag/comment on files and search by them. They had some performance problems and were not very well received.

Nepomuk has been superseded by Baloo, so you can still tag/rate/comment files now.

Most KDE apps also use KIO slaves: https://www.maketecheasier.com/quick-easy-guide-to-kde-kio-s...

> System Side Semantic Keybindings

> Windows

Plasma 4 used to have compositor-powered tabs for any apps. Can't say if they will be coming back to Plasma 5. Automatic app-specific colors (and other rules) are possible now.

> Smart copy and paste

The clipboard plasmoid in the system tray has multiple items, automatic actions for what to do with different types of content and can be pinned, to remain visible.

> Working Sets

These are very similar to how Activities work. Don't seem to be very popular.

lake99 2 days ago 1 reply      
> Traditional filesystems are hierarchical, slow to search, and don't natively store all of the metadata we need

I don't know what he means by "traditional", but Linux native filesystems can store all the metadata you'd want.

> Why can't I have a file in two places at once on my filesystem?

POSIX compatible filesystems have supported that for a long time already.
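
The long-supported POSIX mechanism here is the hard link: two directory entries pointing at one inode. A small Python demonstration (note that hard links can't span filesystems, and most systems forbid them on directories):

```python
import os
import tempfile

# "A file in two places at once" via a POSIX hard link: two directory
# entries sharing one inode, hence one set of contents.
with tempfile.TemporaryDirectory() as d:
    os.mkdir(os.path.join(d, "favorites"))
    original = os.path.join(d, "song.mp3")
    alias = os.path.join(d, "favorites", "song.mp3")

    with open(original, "w") as f:
        f.write("fake audio data")

    os.link(original, alias)  # create a second name for the same file

    # Both paths resolve to the same inode and the same contents.
    same_inode = os.stat(original).st_ino == os.stat(alias).st_ino
    with open(alias) as f:
        content = f.read()

print(same_inode, content)  # True fake audio data
```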

It seems to me that all the things he wants are achievable through Plan9 with its existing API. The only thing missing is the ton of elbow grease to build such apps.

chrisleader 2 days ago 0 replies      
"First of all, it's quite common, especially in enterprise technology, for something to propose a new way to solve an existing problem. It can't be used to solve the problem in the old way, so it doesn't work, and proposes a new way, and so no-one will want that. This is how generational shifts work - first you try to force the new tool to fit the old workflow, and then the new tool creates a new workflow. Both parts are painful and full of denial, but the new model is ultimately much better than the old. The example I often give here is of a VP of Something or Other in a big company who every month downloads data from an internal system into a CSV, imports that into Excel and makes charts, pastes the charts into PowerPoint and makes slides and bullets, and then emails the PPT to 20 people. Tell this person that they could switch to Google Docs and they'll laugh at you; tell them that they could do it on an iPad and they'll fall off their chair laughing. But really, that monthly PowerPoint status report should be a live SaaS dashboard that's always up-to-date, machine learning should trigger alerts for any unexpected and important changes, and the 10 meg email should be a Slack channel. Now ask them again if they want an iPad." - Benedict Evans
jmull 2 days ago 1 reply      
This isn't worth reading.

(It's painfully naive, poorly reasoned, has inaccurate facts, is largely incoherent, etc. Even bad articles can serve as a nice prompt for discussion, but I don't think this one is even good for that. I don't think we'd ever get past arguing about what it is most wrong about.)

hackermailman 2 days ago 0 replies      
This guy wants GuixSD for 60% of his feature requests, like isolated apps, version control, snapshots, ease of configuration, and the ability to abstract all of it away; and Hurd for his multi-threaded ambitions, modularity, the ability to do things like mount a database in a home directory to use as a fileserver, and message passing. This is slowly happening already: https://fosdem.org/2017/schedule/event/guixhurd/

Then he wants to completely redesign a GUI to manage it all, which sounds a lot like Firefox OS with aware desktop apps, but with the added bonus that most things that require privileges on desktop OSes no longer need them with Guix. Software drivers are implemented in user space as servers with GNU Hurd, so you can access these things and all the functionality that comes with them, exactly what the author wants.

xolve 2 days ago 0 replies      
Not an ideal article for anything. It looks like it was written with limited research; by the end of it I can hardly keep focus.

> Bloated stack.

True, but there are options the author hasn't discussed.

> A new filesystem and a new video encoding format.

Apple created a new FS and video format. These are far too fundamental as changes to be glossed over as trivial in a single line.

> CMD.exe, the terminal program which essentially still lets you run DOS apps was only replaced in 2016. And the biggest new feature of the latest Windows 10 release? They added a Linux subsystem. More layers piled on top.

The Linux subsystem is a great feature of Windows. The ability to run bash on Windows natively: what's the author complaining about?

> but how about a system wide clipboard that holds more than one item at a time? That hasn't changed since the 80s!

Heard of Klipper and similar apps in KDE 5/Plasma? It's been there for ages and keeps text, images, and file paths in the clipboard.

> Why can't I have a file in two places at once on my filesystem?

Hard links and soft links??

> Filesystem tags

They're there!

What I feel about the article is: OSes have had these capabilities for a long time; where are the killer applications written for them?

benkuykendall 2 days ago 0 replies      
The idea of system wide "document database" is really intriguing. I think the author identified a real pattern that could be addressed by such a change:

> In fact, many common applications are just text editors combined with data queries. Consider iTunes, Address Book, Calendar, Alarms, Messaging, Evernote, Todo list, Bookmarks, Browser History, Password Database, and Photo manager. All of these are backed by their own unique datastore. Such wasted effort, and a block to interoperability.

The ability to operate on my browser history or emails as a table would be awesome! And this solves so many issues about losing weird files when trying to back up.

However, I would worry a lot about schema design. Surely most apps would want custom fields in addition to whatever the OS designer decided constitutes an "email". This would throw interoperability out the window, and keeping it fast becomes a non-trivial DB design problem.

Anyone have more insights on the BeOS database or other attempts since?

(afterthought: like a lot of ideas in this post, this could be implemented in userspace on top of an existing OS)
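
A userspace sketch of that afterthought: SQLite as the shared store, with a core schema plus a JSON column for app-specific fields, which is one common answer to the schema-design worry raised above. All table, field, and document names here are invented:

```python
import json
import sqlite3

# Sketch of a system-wide "document database": a core schema the OS
# defines, plus a JSON blob for each app's custom fields.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE documents (
        id     INTEGER PRIMARY KEY,
        type   TEXT NOT NULL,     -- 'email', 'bookmark', 'song', ...
        title  TEXT NOT NULL,
        extra  TEXT NOT NULL      -- JSON: app-specific fields
    )
""")

db.execute(
    "INSERT INTO documents (type, title, extra) VALUES (?, ?, ?)",
    ("email", "Meeting notes",
     json.dumps({"from": "bob@example.com", "read": False})),
)
db.execute(
    "INSERT INTO documents (type, title, extra) VALUES (?, ?, ?)",
    ("bookmark", "HN", json.dumps({"url": "https://news.ycombinator.com"})),
)

# Any app can query the shared core fields...
emails = db.execute(
    "SELECT title, extra FROM documents WHERE type = 'email'"
).fetchall()

# ...and only the owning app needs to understand its own extras.
title, extra = emails[0]
sender = json.loads(extra)["from"]
print(title, sender)  # Meeting notes bob@example.com
```

This keeps the core queryable by everyone while letting extras stay opaque, at the cost of not being able to index or join on them, which is where the non-trivial DB design problem comes back in.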

mwcampbell 2 days ago 1 reply      
I'm glad the author thought about screen readers and other accessibility software. Yes, easy support for alternate input methods helps. But for screen readers in particular, the most important thing is a way to access a tree of objects representing the application's UI. Doing this efficiently over IPC is hard, at least with the existing infrastructure we have today.

Edit: I believe the state of the art in this area is the UI Automation API for Windows. In case the author is reading this thread, that would be a good place to continue your research.

dgreensp 2 days ago 2 replies      
I love it, especially using structured data instead of text for the CLI and pipes, and replacing the file system with a database.

Just to rant on file systems for a sec, I learned from working on the Meteor build tool that they are slow, flaky things.

For example, there's no way on any desktop operating system to read the file tree rooted at a directory and then subscribe to changes to that tree, such that the snapshot combined with the changes gives you an accurate updated snapshot. At best, an API like FSEvents on OS X will reliably (or 99% reliably) tell you when it's time to go and re-read the tree or part of the tree, subject to inefficiency and race conditions.
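
Since no mainstream API gives you that snapshot-plus-changes guarantee, watchers fall back to exactly what is described here: re-reading the tree and diffing it against the last snapshot. A minimal Python sketch of that fallback (file names and layout are invented):

```python
import os
import tempfile
from pathlib import Path

def snapshot(root):
    """One full re-read: map every file under root to (size, mtime)."""
    snap = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            st = os.stat(p)
            snap[os.path.relpath(p, root)] = (st.st_size, st.st_mtime_ns)
    return snap

def diff(old, new):
    """Compare two snapshots and report (added, removed, changed) paths."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, removed, changed

with tempfile.TemporaryDirectory() as root:
    Path(root, "a.txt").write_text("one")
    before = snapshot(root)

    Path(root, "b.txt").write_text("two")     # add a file
    os.remove(os.path.join(root, "a.txt"))    # remove one
    after = snapshot(root)

    result = diff(before, after)

print(result)  # (['b.txt'], ['a.txt'], [])
```

Note the race conditions the comment mentions are still here: anything that changes between the two walks is attributed to the wrong interval, and mtime granularity can hide rapid edits.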

"Statting" 10,000 files that you just read a second ago should be fast, right? It'll just hit disk cache in RAM. Sometimes it is. Sometimes it isn't. You might end up waiting a second or two.

And don't get me started on Windows, where simply deleting or renaming a file, synchronously and atomically, are complex topics you could spend a couple hours reading up on so that you can avoid the common pitfalls.

Current file systems will make even less sense in the future, when non-volatile RAM is cheap enough to use in consumer devices, meaning that "disk" or flash has the same performance characteristics and addressability as RAM. Then we won't be able to say that persisting data to a disk is hard, so of course we need these hairy file system things.

Putting aside how my data is physically persisted inside my computer, it's easy to think of better base layers for applications to store, share, and sync data. A service like Dropbox or BackBlaze would be trivial to implement if not for the legacy cruft of file systems. There's no reason my spreadsheets can't be stored in something like a git repo, with real-time sync, provided by the OS, designed to store structured data.

IamCarbonMan 2 days ago 0 replies      
All of this is possible without throwing out any existing technology (at least for Linux and Windows; if Apple doesn't envision a use case for something it's very likely never going to exist on their platform). Linux compositors have the ability to manipulate the window however the hell they want, and while it's not as popular as it used to be, you can change the default shell on Windows and use any window manager you can program. A database filesystem is two parts: a database and a filesystem. Instead of throwing out the filesystem which works just fine, add a database which offers views into the filesystem. The author is really woe-is-me about how an audio player doesn't have a database of mp3s, but that's something that is done all the time. Why do we have to throw out the filesystem just to have database queries? And if it's because every app has to have their own database- no they don't. If you're going to rewrite all the apps anyways, then rewrite them to use the same database. Problem solved. The hardest concept to implement in this article would be the author's idea of modern GUIs, but it can certainly be done.

On top of this, the trade-off of creating an entirely new OS is enormous. Sure, you can make an OS with no apps because it's not compatible with anything that's been created before, and then you can add your own editor and your own web browser and whatever. And people who only need those things will love it. But if you need something that the OS developer didn't implement, you're screwed. You want to play a game? Sorry. You want to run the software that your school or business requires? Sorry. Seriously, don't throw out every damn thing ever made just to make a better suite of default apps.

jimmaswell 2 days ago 0 replies      
It's patently false that Windows hasn't innovated, in UX or otherwise. Start menu search, better driver containment and other BSOD reduction, the multi-monitor expanding taskbar, taskbar button reordering, other Explorer improvements, lots of things.
microcolonel 2 days ago 0 replies      
> Why can't I have a file in two places at once on my filesystem?

You can! Use hardlinks.

> Window Managers on traditional desktops are not context or content aware, and they are not controlable by other programs.

There are well established standards for controlling window managers from programs, what on earth are you talking about?

> Applications would do their drawing by requesting a graphics surface from the compositor. When they finish their drawing and are ready to update they just send a message saying: please repaint me. In practice we'd probably have a few types of surfaces for 2d and 3d graphics, and possibly raw framebuffers. The important thing is that at the end of the day it is the compositor which controls what ends up on the real screen, and when. If one app goes crazy the compositor can throttle it's repaints to ensure the rest of the system stays live.

Just like Wayland!

> All applications become small modules that communicate through the message bus for everything. Everything. No more file system access. No hardware access. Everything is a message.

Just like flatpak!

> Smart copy and paste

This is entirely feasible with the current infrastructure.

> Could we actually build this? I suspect not. No one has done it because, quite honestly, there is no money in it. And without money there simply aren't enough resources to build it.

Some of this is already built, and most of it is entirely feasible with existing systems. It's probably not even that much work.

Animats 2 days ago 0 replies      
If you want to study user interfaces, look at programs which solve a hard problem - 3D animation and design programs. Learn Inventor or Maya or Blender.

Autodesk Inventor and Blender are at opposite ends of the "use the keyboard" range. In Inventor, you can do almost everything with the mouse except enter numbers and filenames. Blender has a 10-page list of "hotkeys". It's worth looking at how Inventor does input. You can change point of view while in the middle of selecting something. This is essential when working on detailed objects.

vbezhenar 2 days ago 1 reply      
I think the next reboot will unify RAM and disk, with a tremendous amount of memory (terabytes) for apps and transparent offloading of huge video and audio files into the cloud. You don't need a filesystem or any persistence layer anymore; all your data structures are persistent. Use immutable structures and you have unlimited undo for the entire life of the device. Reboot doesn't make sense; all you need is to flush the processor registers before turning off. This would require rewriting the OS from the ground up, but it would allow for a completely new user experience.
snarfy 1 day ago 1 reply      
What we have today grew together organically over time like a city. To do what is described in the article is akin to demolishing the city and completely rebuilding it from scratch. But it's not just from scratch; it's replacing all of the infrastructure and tooling that went into building the parts of the city, like plumbing and electrical. A state-of-the-art substation requires its own infrastructure to build. It's akin to requiring a whole new compiler toolchain and software development system just to get started with rebooting the OS.

If this happens it's only going to happen with a top-down design from an industry giant. Android and Fuchsia are examples of how it might happen. Will it? It seems these days nobody cares as long as the browser renders quickly.

thibran 2 days ago 0 replies      
Interesting to read someone else's ideas about this topic, which I've thought about quite a lot myself. The basic building block of a better desktop OS is, IMHO and as the OP wrote, a communication contract between capabilities and the glue (a.k.a. apps). I don't think we would need that many capability services to build something useful (it doesn't even need to be efficient at first). For a start it might be enough to wrap existing tools, expose them, and see if things work or not.

Maybe start by building command-line apps to see how well the idea works (cross-platform would be nice). I guess the resulting system would have some similarities with RxJava, which lets you compose things together (fetch A and B asynchronously, then build C and send it to D if it doesn't contain Foo).
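
That composition style can be sketched with stdlib asyncio; the service names and payloads below are invented placeholders:

```python
import asyncio

# Sketch: fetch A and B asynchronously, build C from them, and forward
# C to D unless it contains "Foo". The "services" are stub coroutines.

async def fetch_a():
    await asyncio.sleep(0.01)   # stand-in for a capability-service call
    return "data-from-A"

async def fetch_b():
    await asyncio.sleep(0.01)
    return "data-from-B"

received = []                   # what "service D" has been sent

async def send_to_d(c):
    received.append(c)

async def pipeline():
    a, b = await asyncio.gather(fetch_a(), fetch_b())  # A and B in parallel
    c = f"{a}+{b}"                                     # build C
    if "Foo" not in c:                                 # filter condition
        await send_to_d(c)
    return c

c = asyncio.run(pipeline())
print(received)  # ['data-from-A+data-from-B']
```

The app never knows where A and B came from; swapping a local stub for a cloud-backed service only changes the coroutine bodies, which is the point of the contract.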

If an app talked to a data service, it would no longer have to know where the data comes from or how it got there. This would allow building a whole new kind of abstraction: data could be stored in the cloud and only downloaded to a local cache when frequently used, then later synced back to the cloud transparently (maybe even ahead of time, because a local AI learned your usage patterns). I know you can have such sync things today; they are just complicated to set up, or cost a lot of money, or work only for specific things/applications, and they are often not accessible to normal users.

Knowing how to interact with the command-line gives advanced users superpowers. I think it is time to give those superpowers to normal users too. And no, learning how to use the command-line is not the way to go ;-)

A capability-service-based OS could even come with a quite interesting monetization strategy: selling extra capabilities, like storage, async computation, or AI services, besides selling applications.

zaro 2 days ago 2 replies      
> I suspect the root cause is simply that building a successful operating system is hard.

Well, it is hard, but that is not the main source of issues. The obstacle to having nice things on the desktop is the constant competition and wheel reinvention: the lack of cooperation.

The article makes some very good points, but just think of this simple fact: it's 2017, and the ONLY filesystem that works seamlessly with macOS, Windows, and Linux at the same time is FAT, a filesystem that is almost 40 years old. And it is not because such a filesystem is so hard to make. Not at all. Now this is at the core of the reasons why we can't have nice things :)

Groxx 2 days ago 0 replies      
>Consider iTunes. iTunes stores the actual mp3 files on disk, but all metadata in a private database. Having two sources of truth causes endless problems. If you add a new song on disk you must manually tell iTunes to rescan it. If you want to make a program that works with the song database you have to reverse engineer iTunes DB format, and pray that Apple doesn't change it. All of these problems go away with a single system wide database.

Well. Then you get Spotlight (on OSX, at least) - system-wide file/metadata/content search.

It's great! It's also quite slow at times. Slow (and costly) to index, slow to query (initial / common / by-name searches are fast, but content searches can take a second or two to find anything - this would be unacceptable in many applications), etc.

I like databases, but building a single well-performing one for all usages is quite literally impossible. Forcing everyone into a single system doesn't tend to add up to a positive thing.

lou1306 2 days ago 4 replies      
Windows 10 didn't add any UX feature? What about Task View (Win+Tab) and virtual desktops?

And why bash the Linux subsystem, which is surely not even developed by the UX team (so no waste of resources) and is a much-needed feature for developers?

BTW, there is a really simple reason why mainstream OSes have a rather conservative design: the vast majority of people just don't care and may even get angry when you change the interaction flow. Many of the ideas in the post are either developer-oriented or require significant training to use proficiently.

nickpsecurity 2 days ago 3 replies      
The author keeps questioning why certain siloing, like the App Store, happens, and then offers technical solutions that won't work. The reason is that the siloing is intentional on the part of the companies developing those applications, to reduce competition and boost profits. They'd rather provide the feature you desire themselves, or through an app they get a 30% commission on.

A lot of the other things the author talks about keep the ecosystems going. The ecosystems, especially key apps, are why many people use these desktop OSes. Those apps and ecosystems take too much labor to rebuild from a clean slate. So, the new OSes tend not to have them at all, or use knock-offs that don't work well enough. Users think they're useless and leave after the demo.

Market effects usually trump technical criteria. That's why the author's recommendations will fail as a whole. "Worse really is better", per Richard Gabriel.

Skunkleton 2 days ago 1 reply      
In 2017 a modern operating system such as Android, iOS, or Chrome (the browser) exists as a platform. Applications developed for these platforms _must_ conform to the application model set by the platform. There is no supported way to create applications that do not conform to the design of the platform. This is in stark contrast to the "1984" operating systems that the OP is complaining about.

It is very tempting to see all the complexity of an open system and wish it were more straightforward, more like a closed system. But this is a dangerous thing to advocate. If we all only had access to closed systems, who would we be ceding control to? Do we really want our desktop operating systems to be just another fundamentally closed-off walled garden?

bastijn 2 days ago 1 reply      
Apart from discussing the content: can I just express my absolute love for (longer) articles that start with a tl;dr?

It gives an immediate answer to "do I need to read this?", and if so, what key arguments should I pay attention to?

Let me finish by expressing my thanks to the author for including a tl;dr.


raintrees 1 day ago 0 replies      
I have been conceptualizing what it would take to abstract away the actual physical workstation into a back-end processing system and multiple UI modules physically scattered throughout my home (I work from home) and grounds.

For example, shifting my workspace from my upstairs office to my downstairs work area just by signing in on the different console setup downstairs. All of my in-process work comes right back up. Right now I do this (kind of) using VMs, but they are limited when addressing hardware, and now I am multiplying that hardware.

Same thing with my streams: switch my audio or video to the next room/zone I want to move to. Start researching how to correctly adjust my weed whip's carburetor, then go out to the garage and pull up my console there, where my workbench and the dismantled tool are.

Eventually my system would track my whereabouts, with the ability (optionally turned on) to automatically shift that IO to the closest hardware setup to me as I move around the structure/property.

And do something like this for each person? So my wife has her streams? Separate back end instance, same mobility to front-end UI hardware?

Can this new Desktop Operating System be designed with that hardware abstraction in mind?

jonahss 2 days ago 1 reply      
The author mentions they wish object-based streams/terminals existed. This is the premise of Windows PowerShell, which today reminds me of the nearly abandoned malls found in the Midwest: full of dreams from a decade ago, but now an empty shell lacking true utility, open to the public for wandering around.
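
What object streams buy you over text can be sketched with Python generators: each stage passes structured records, so downstream stages filter on fields instead of re-parsing text. The stage names loosely mirror PowerShell's Where-Object/Select-Object; the process list is made up:

```python
# Object pipeline sketch: structured records flow between stages,
# so no stage ever has to re-parse columns out of text.

processes = [
    {"name": "chrome",  "pid": 101, "cpu": 12.5},
    {"name": "spotify", "pid": 102, "cpu": 3.1},
    {"name": "compile", "pid": 103, "cpu": 88.0},
]

def where(stream, predicate):          # roughly Where-Object
    return (item for item in stream if predicate(item))

def select(stream, *fields):           # roughly Select-Object
    return ({f: item[f] for f in fields} for item in stream)

# Analogue of: ps | where { cpu > 10 } | select name
hogs = list(select(where(processes, lambda p: p["cpu"] > 10), "name"))
print(hogs)  # [{'name': 'chrome'}, {'name': 'compile'}]
```
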
joshmarinacci 2 days ago 0 replies      
OP here. I wasn't quite ready to share this with the world yet, but what are you gonna do.

I'm happy to answer your questions.

gumby 1 day ago 0 replies      
I really agree that the hermetic siloization of applications and their data over the past 30 years has been a major step backwards. I also wish all apps were composable.

It seems to require a mental shift few developers are willing to adopt however. Good luck -- you are on the right track on many things (even if I can't imagine life without a command line).

mherrmann 2 days ago 1 reply      
What I hate is the _bloat_. Why is GarageBand forced upon me with macOS? Or iTunes? Similarly for video players etc on all the other OSs. I am perfectly capable of installing the software I need, thank you very much.
ksec 2 days ago 3 replies      
I hate to say this, but an ideal Desktop OS, at least for the majority of consumers, is mostly here, and it is iOS 11.

Having used the newest iPad Pro 10.5 (along with the iOS 11 beta), the first few hours were pure joy; after that, frustration and anger came flooding in. Because what I realized is that this tiny little tablet, costing only half a MacBook Pro or even iMac, limited by a fanless design with lower TDP, 4GB of memory, no dedicated GPU, and a likely much slower SSD, provides a MUCH better user experience than any Mac or Windows PC I have ever used, including the latest MacBook Pro.

Everything is fast and buttery smooth; even the web browsing experience is better. The only downside is that you are limited to touch screen and keyboard. A number of times I have wondered if I could attach a separate monitor and use it like the Samsung desktop dock.

There is far too much backward compatibility to care for with both Windows and Mac. And this is similar to the discussion in the previous Software off Rails thread: people are less likely to spend time optimizing when things work well enough out of the box.

gshrikant 2 days ago 2 replies      
While I'm not sure I agree with everything in the article, it does mention a point I've been thinking about for a while - configuration.

I really do think applications should try to zero-in on a few standard configuration file formats - I really don't have a strong preference on one (although avoiding XML would be nice). It makes the system uniform and makes it easier to move between applications. Of course, applications can add extended sections to suit their need.

Another related point is the location of configuration files - standard Linux/Unix has a nice hierarchy, /etc for system-wide and /usr/local/etc and others for more local, user-specific configurations (I'm sure Windows and OS X have a similar hierarchy too) - but different applications still end up placing their configuration files in unintuitive places.

I find this lack of uniformity disturbing - especially because it looks so easy (at least on the surface) to fix and the benefits would be nice - easier to learn and scriptable.

A last unrelated point - I don't see why Linux distributions cannot standardize around a common repository. Debian and Ubuntu share several packages, yet are forced to maintain separate package databases, and you can't easily mix and match packages between them. This replication of effort seems more ideological than pragmatic (of course, there probably are some practical reasons too). But I can't see why we can't all pool resources and share a common 'universal' application repository - maybe divided into 'Free', 'Non-Free', and 'Contrib/AUR'-like granular divisions so users have full freedom to choose the packages they want.

Like other things, I think these ideas have been implemented before but I'm a little disappointed these haven't made it into 'mainstream' OS userlands yet.

doggydogs94 2 days ago 0 replies      
FYI, most of the author's complaints about the command line were addressed by Microsoft in PowerShell. For example, PowerShell pipes objects, not text.
nebulous1 2 days ago 0 replies      
I much preferred the second half of this to the first half.

However, both seemed to end up with the same fundamental flaw: he's either underestimating or understating how absurdly difficult most of what he's suggesting is. It's all well and good saying that we can have a standardized system for email, with everything being passed over messages, but what about everything else? It's extremely difficult to standardize an opinionated system that works for everything, which is exactly why so many operating system constructs are more general than specific. For this to all hang together you would have to standardize everything, which will undoubtedly turn into an insane bureaucratic mess. Not to mention that a lot of software makers actively fight against having their internal formats open.

hyperfekt 2 days ago 0 replies      
This would be neat, but isn't radical enough yet IMHO. If everything on the system is composed of pure functions operating on data, we can supercharge the OS and make everything both possible AND very simple. The whole notion of an 'application' is really kind of outmoded.
agumonkey 2 days ago 1 reply      
I see https://birdhouse.org/beos/refugee/trackerbase.gif for 2 seconds and I feel happy. So cute, clear, useful.
al2o3cr 2 days ago 0 replies      

> Window Managers on traditional desktops are not context or content aware, and they are not controllable by other programs.

My copy of Divvy is confused by this statement. :)

casebash 2 days ago 0 replies      
I wouldn't say that innovation in desktop is dead, but most of it seems to be driven by features or design patterns copied from mobile or tablet. Take for example Windows 8 and Windows 10: Windows 8 was all about moving to an OS that could run on a whole host of devices, while Windows 10 was all about fixing up all the errors made in this transition.
mcny 1 day ago 0 replies      
Hi Josh,

Thank you for writing this.

Just noticed a small typo (I think)

> For a long time Atom couldn't open a file larger than 2 megabytes because scrolling would be to slow.

to should be too.


st3fan 2 days ago 1 reply      
> And if you wanted to modify your email client, or at least the one above (Mail.app, the default client for Mac), there is no clean way to extend it. There are no plugins. There is no extension API. This is the result of many layers of cruft and bloat.

I am going to say that it is probably a product decision in the case of Mail.app.

Whether Mail.app is a big steaming pile of cruft and bloat inside, nobody knows, since it is closed source.

Ellen Pao: My lawsuit failed. Others won't thecut.com
660 points by gkanai  1 day ago   541 comments top 6
dang 1 day ago 0 replies      
All: this article was flagged but we've turned the flags off because it contains significant new information. Threads about sexism have upticked in contentiousness lately (as has everything else, it seems), so would everyone please take care to follow these rules?

1. Please post civilly and substantively, or not at all;

2. If you have a substantive point to make, make it thoughtfully; otherwise please don't comment until you do.

Yes, there's redundancy there; we appear to need it.


strken 1 day ago 11 replies      
Articles like this really make me aware that men and women like Ellen Pao and her former partners live in a separate parallel world: three degrees, $10 million golden parachutes, private jet flights to ski resorts, affairs with a creepy married co-worker in Germany, machismo-driven muscling for VC connections, bisexual finance wizards who kickstart an Ivy-League LGBT program, dominance games over which chair an exec sits in, PR firms hired to smear uppity former partners... it's like a movie.

I worry that the media looks at cases like this as typical of the experience of women in tech, and downplays the impact of obvious and unobjectionable steps like "recruit junior devs from the ranks of biology grads" and "give expectant mothers maternity leave" because systemic changes aren't as interesting as diversity training or a VC partner's lawsuit.

Joeri 1 day ago 14 replies      
It's easy to get hung up on the particulars of Pao's story and get sidetracked into defending or judging her, but I feel that is beside the point. I am more interested in the wider notion of why she wrote this article: to point out that sexism in tech is a thing, and that it shouldn't be.

I'm wondering though: is this just about sexism, or is it about professionalism and maturity? Getting hit on by someone higher up the hierarchy than you can make it impossible to do your job, so that behavior is clearly unprofessional. But getting yelled at by your boss for shipping a bug is also unprofessional, and can also make it a toxic work environment. I'm not saying the two are the same, just that both are examples of unprofessional behavior that many places will tolerate.

Isn't it time we have conversations about what it means to be a professional in tech? Maybe other industries suffer less from these things because they have a longer history and have more guild-like working practices, where professional behavior is more clearly defined. In tech people get away with wildly unprofessional behavior as long as "they get stuff done", and personally I never felt that was acceptable.

Maybe this stuff is also sort of everywhere. Plenty of industries have toxic working relationships. Why isn't professionalism part of standard education tracks? I studied CS and I never learned about what it means to be a professional software developer. How do you have productive conversations with coworkers? How do you organize your work effectively? All of these things you're supposed to figure out on your own, but looking around I can tell that mostly people never do, or only do so after decades of getting it wrong.

ralusek 1 day ago 9 replies      
I think that it's really interesting that conversations about venture capital and conversations about engineering both get to be lumped into a more general conversation about sexism in Silicon Valley.

A lot of engineers have a good bit to say against the existence of widespread sexism in engineering, myself included. Engineering in computer science has long had a nearly nonexistent barrier to entry outside of one's capabilities and their relevance to the position. Even traditional, and technically very relevant, lateral predictors of output, such as formal education, are largely ignored. Your accomplishments and how capably you interview are ultimately what get you the hire, with very few exceptions. Anybody who has been in a hiring position can speak to the utilitarian pursuit of the placement; race and sex are the last thing on the mind come hiring time.

All of that being said, however, I don't find it remotely hard to believe Ellen Pao's recounting of her experience in the world of venture capital. I'm actually relatively certain that she's spared us a good bit of the details. But this isn't engineering, this is finance: quite rarely about the utility of any particular individual in a role, and almost entirely centered around pretty horrible characteristics. Cronyism is the most important characteristic in the club - trading favors, trading connections, looking the other way, getting away with this, getting away with that. The whole thing is a zero-sum game, because nobody within is creating any value; you're only ever vying for a piece of the pie baked by the outsiders who actually produce things. As Pao points out, any partner you have largely considers you a mechanism by which they have less investment capital available themselves, and any senior sees you as a way to bubble up the greatest picks for them to skim off the top.

The point is that the business is not about merit, it's about being in the club and playing ball. To deviate from the standards of the club just means you're less of a sure thing when it comes to being a crony, and it doesn't take much to understand why a woman is an outsider in a club like this.

So when we talk about sexism in Silicon Valley, let's please not conflate these two very different businesses. One of them is made up of worker bees, and we don't care what kind of bee you are as long as you're outputting honey. The other one is literally Wall Street pretending like it's anything but.

dreta 1 day ago 8 replies      
An interesting read. Though, for me, VC in general is not the kind of job that favors people who are nice. When people are rude to you, or try to use you, there's a multitude of ways you can interpret that. For Ellen, it's sexism. What baffled me was that in the opening paragraphs she felt the need to point out that the powerful men were white. For me, it set the tone for the whole article, and painted a clear picture of her attitude towards the case and people involved. Given the current political climate in SV, it's a poor attempt at manipulation, and doesn't help her come off as reasonable.
Al-Khwarizmi 1 day ago 4 replies      
Sexism in Silicon Valley or sexism among the corporate elites?

I doubt the average tech worker, who doesn't travel in private jets, casually talks about porn actresses and sex workers at their job.

However, I have often heard and read such anecdotes about elite executives, also in non-tech sectors. I'd say that kind of attitude is more related to the impunity that comes with power than with tech or non-tech.

Examining a vintage RAM chip, I find a counterfeit with a different die inside righto.com
449 points by darwhy  1 day ago   134 comments top 20
NikolaNovak 1 day ago 3 replies      
Hah... Through the first few sentences I kept wondering which wondrous architecture we were talking about, where a "64-bit" memory chip is considered "vintage"...?

It took me embarrassingly long to realize that it's not 64-bit bus, it's a 64-bit chip... holding an amazing 4x16bits=64bits of data total.

Just goes to show it's hard to be sure where your unspoken assumptions may lie.... :-)

pavel_lishin 1 day ago 6 replies      
> As for Robert Baruch's purchase of the chip, he contacted the eBay seller who gave him a refund. The seller explained that the chip must have been damaged in shipping!

I think at that point you report them to eBay for fraud, don't you? Or is that just spitting in the ocean?

todd8 1 day ago 1 reply      
This, along with the complaints in the comments here, is quite discouraging. I'm ready to give up on third-party sellers on Amazon, see https://news.ycombinator.com/item?id=14993216 [I Fell Victim to a $1,500 Used Camera Lens Scam on Amazon], and now Ebay looks like it's not going to be a viable alternative.
happycube 1 day ago 5 replies      
"The eBay seller gave him a refund. The seller explained that the chip must have been damaged in shipping! (Clearly you should pack your chips carefully so they don't turn into something else entirely.)" ;)
jk2323 1 day ago 4 replies      
"Why would someone go to the effort of creating counterfeit memory chips that couldn't possibly work? The 74LS189 is a fairly obscure part, so I wouldn't have expected counterfeiting it to be worth the effort. The chips sell for about a dollar on eBay, so there's not a huge profit opportunity. "

This sounds odd: small/tiny markup, small market, high fake-detection rate. I wonder if there is something about the story that we're missing.

thmsths 1 day ago 5 replies      
I would be interested to know how the Pentagon deals with those 15% of counterfeit ICs, the implications are quite scary.
robryk 1 day ago 3 replies      
If one counterfeits a chip using something that will not work at all, why put any chip inside at all? Why not just place a resistor between VCC and GND?
brooklyntribe 1 day ago 0 replies      
From his posts, he's like the smartest person in the world. At least that's my impression.

Mine bitcoin with paper and pencil? Is anyone else in the world even thinking about something so far out?

windlessstorm 1 day ago 2 replies      
Thanks for this - it was an awesome read. Are there any more such blogs for learning and getting into electronics and other low-level stuff?

PS. I am a newbie software engineer (C/networking), recently fascinated by and drawn towards electronics.

jeffwass 19 hours ago 0 replies      
One of the first pics in that article comes from an earlier chip he previously reviewed - the Intel 3101. I'm proud to say my dad provided Ken with those two Intel 3101 chips.

Ken's review of the 3101 is here: http://www.righto.com/2017/07/inside-intels-first-product-31...

This is the first IC ever produced by Intel.

My dad had a few of these chips from an old computer. Some of the 3101 chips are from such early runs they don't even have the usual date stamps on the packages, and were outsourced by Intel into generic wirebonded IC packages.

jxramos 1 day ago 1 reply      
Very cool article. I found myself strangely hit with a wave of nostalgia when the piece came upon "DTMF: dialing a Touch-Tone phone"
kazinator 1 day ago 0 replies      
> Why would someone go to the effort of creating counterfeit memory chips that couldn't possibly work?

Maybe because it's a mistake?

Some of the people working at the factory don't know a potato chip from a silicon chip?

True counterfeit chips use the correct die. It is stolen, but the knock-offs cut corners: you're getting something that is not quality-controlled, or perhaps even a reject off the factory floor (that might just work in your use case, so you won't notice).

Sometimes counterfeit chips use a different implementation, but of the right general spec. Well sort of:

https://news.ycombinator.com/item?id=14685671 ("Ti NE555 real vs fake: weekend die-shot ").

kumarvvr 1 day ago 5 replies      
So, the chip is fake, but how come such chips could work satisfactorily in their place in a PC?
agjacobson 1 day ago 0 replies      
Why do you think the 74LS189 was being counterfeited? It was the touch-tone chip being counterfeited, and disguised as a 74LS189. The buyer knew the ruse.
kutkloon7 1 day ago 0 replies      
Ken Shirriff is amazing. His blog entries are really worth reading.
yuhong 1 day ago 0 replies      
This reminds me of the 1988 DRAM shortage.
jackblack8989 1 day ago 4 replies      
Any experts here care to tell how one checks for RAM quality? Does CPU-Z do it? (Writing from work, don't have admin perms to use it.)

Not talking about this particular case, but the general case of RAM not working.

CamperBob2 1 day ago 1 reply      
> The motivation (for the use of an LFSR instead of a traditional counter) is a shift register takes up less space than a counter on the chip; if you don't need the counter to count in the normal order, this is a good tradeoff

That's kind of a profound observation, even though it's obvious once you think about it. It never occurred to me that a maximal-length shift register is actually a simpler, more efficient logic structure than either a carry-chain adder or a ring counter.
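
To make the tradeoff concrete, here is a minimal sketch (in JavaScript, purely for illustration; the 4-bit width and tap positions are my assumptions, not the chip's actual configuration) of a Fibonacci LFSR that steps through every nonzero 4-bit state in a scrambled but fixed order:

```javascript
// 4-bit maximal-length LFSR using the primitive polynomial x^4 + x^3 + 1.
// Each step needs only a shift and a single XOR gate -- no carry chain --
// which is why it can be smaller on the die than a binary counter.
function lfsrSequence(seed = 0b0001) {
  const states = [];
  let s = seed;
  do {
    states.push(s);
    const bit = ((s >> 3) ^ (s >> 2)) & 1; // feedback from taps 4 and 3
    s = ((s << 1) | bit) & 0xF;            // shift left, inject feedback bit
  } while (s !== seed);
  return states; // 15 distinct states; the all-zero state is unreachable
}
```

Running it shows the "counting" order is scrambled (1, 2, 4, 9, 3, ...), which is fine for addressing memory cells as long as reads and writes use the same sequence.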

basicplus2 1 day ago 0 replies      
Now I know why that project I designed didn't work...
gesman 1 day ago 2 replies      
A chip inside the computer with a little telephone hidden inside.

You don't mind if your computer will dial in to China sometime, do you?

How JavaScript works: inside the V8 engine sessionstack.com
456 points by zlatkov  1 day ago   106 comments top 10
Veedrac 1 day ago 12 replies      
> How to write optimized JavaScript

This is all sensible advice if you're interested in writing fast-enough code, but I do find there's a lack of material for people who want to write fast JavaScript. Pretty much the only thing I've found is the post by Oz [1], though I really don't want to have to compile Chrome.

For example, I have a method in JavaScript that does a mix of (integer) arithmetic and typed-array accesses; no object creation or other baggage. I want it to go faster, and with effort I managed to speed it up by a factor of 5. One of the things that helped was inlining the {data, width, height} fields of an ImageData object; just moving them to locals dropped the time by ~40%.

Yet after all this effort, mostly based on educated guesses since Chrome's tooling doesn't expose the underlying JIT code for analysis, the code is still suboptimal. There's a pair of `if`s such that if I swap their order, that part of the code allocates. How do people deal with these issues? A large fraction of this code is still allocating, and I haven't a clue where or why.
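
For what it's worth, the field-hoisting change described above tends to look like this (the function names and the invert kernel are illustrative, not the actual code being discussed):

```javascript
// Sketch of hoisting object fields into locals inside a hot pixel loop.
// `imageData` stands in for an ImageData-like object with a .data typed array.
function invertSlow(imageData) {
  for (let i = 0; i < imageData.data.length; i += 4) {
    // Every iteration re-reads .data -- a property lookup the JIT may not
    // reliably hoist out of the loop for you.
    imageData.data[i] = 255 - imageData.data[i];
    imageData.data[i + 1] = 255 - imageData.data[i + 1];
    imageData.data[i + 2] = 255 - imageData.data[i + 2];
  }
}

function invertFast(imageData) {
  const data = imageData.data; // hoist once; loop body now touches only locals
  const n = data.length;
  for (let i = 0; i < n; i += 4) {
    data[i] = 255 - data[i];
    data[i + 1] = 255 - data[i + 1];
    data[i + 2] = 255 - data[i + 2];
  }
}
```

Both produce identical output; the second just gives the optimizer less to prove.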

Perhaps I'm asking too much from a language without value types (srsly tho, every fast language has value types), but what I want is clearly possible: asm.js does it! I don't really want to handwrite that, though.

[1]: https://www.html5rocks.com/en/tutorials/performance/mystery/ (E: This link is actually written from a different perspective than the one I read, but the content is the same.)

fdw 1 day ago 0 replies      
If you're into V8 internals, I'd recommend watching these talks by Franziska Hinkelmann, a V8 engineer at Google: https://www.youtube.com/watch?v=1kAkGWJZ6Zo , https://www.youtube.com/watch?v=B9igDWV5ZUg and https://www.youtube.com/watch?v=p-iiEDtpy6I&t=606s

She's also recently started blogging at https://medium.com/@fhinkel

cm2187 1 day ago 11 replies      
I was wondering: given that 90% of the JavaScript in browsers must be standard libraries (jQuery, Bootstrap & co), wouldn't it make sense for Google to hash the source code of every published version of these libraries, compile those statically with full optimisation, and ship the binaries as part of their updates to the browser, so that you only have to compile the idiosyncratic part of the code?
yanowitz 1 day ago 1 reply      
Interesting article--I'd love to see one just on GC.

I just downloaded the latest Node.js sources, and V8 still has a call to CollectAllAvailableGarbage in a loop of 2-7 passes. It does this if a much cheaper mark-and-sweep fails. Under production loads, that would occasionally happen. This led to stop-the-world GC pauses of 3600+ms with V8, which was terrible for our p99 latency.

The fix still feels weird -- we just commented out the fallback strategy and saw much tighter response time variance with no increased memory footprint (RSS).

I never submitted a patch though because although it was successful for our workload, I wasn't sure it was generally appropriate (exposed as a runtime flag) and I left the job before I could do a better job of running it all down.

btown 1 day ago 0 replies      
Is there a way to see what hidden class an object has? For instance, if an array of objects is parsed from JSON, were all the objects assigned the same hidden class? Alternatively, can one obtain statistics about hidden-class usage? This seems like it would be very helpful for real-world apps, especially given the prevalence of data-intensive Electron apps.

EDIT: https://www.npmjs.com/package/v8-natives haveSameMap seems to do exactly this!

dlbucci 1 day ago 3 replies      
> Also, try to avoid pre-allocating large arrays. It's better to grow as you go.

Is this really true? I've only heard the opposite (preallocate arrays whenever possible) and I know that preallocation was a significant performance improvement on older devices with older javascript engines.
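
One reason the advice conflicts, if I understand V8's heuristics correctly: `new Array(n)` starts life as a "holey" array, while growing with `push` keeps it "packed", and element kinds only ever transition toward the more general kind. A sketch (the two builds are functionally identical; any performance difference is engine- and version-dependent):

```javascript
// Two ways to build the same array. In V8, `new Array(n)` allocates a
// "holey" array (reads must check for holes), and it stays holey even after
// every slot is filled -- one reason grow-as-you-go can be as fast or faster.
function buildPreallocated(n) {
  const out = new Array(n); // holey element kind from the start
  for (let i = 0; i < n; i++) out[i] = i * i;
  return out;
}

function buildGrown(n) {
  const out = []; // stays packed as it grows
  for (let i = 0; i < n; i++) out.push(i * i);
  return out;
}
```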

kevmo314 1 day ago 1 reply      
> Now, you would assume that for both p1 and p2 the same hidden classes and transitions would be used. Well, not really. For p1, first the property a will be added and then the property b. For p2, however, first b is being assigned, followed by a. Thus, p1 and p2 end up with different hidden classes as a result of the different transition paths. In such cases, it's much better to initialize dynamic properties in the same order so that the hidden classes can be reused.

Does that mean an object with n properties takes up O(n^2) memory for the class definitions or O(n!) if the classes do not guarantee a property initialization order?
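
A minimal sketch of the quoted advice (names are illustrative only; as I understand it, the transitions form a tree, so only the shapes a program actually creates get materialized, but that is worth checking against the V8 docs):

```javascript
// Same properties, different insertion order: V8 gives the two results
// different hidden classes, so call sites that see both become polymorphic.
function makeP1() { const o = {}; o.a = 1; o.b = 2; return o; }
function makeP2() { const o = {}; o.b = 2; o.a = 1; return o; }

// Initializing in one consistent order lets every instance share a single
// transition chain (and thus a single hidden class):
function makePoint(a, b) { return { a, b }; }
```

The property enumeration order makes the divergence visible even without engine internals: `Object.keys(makeP1())` is `['a', 'b']` while `Object.keys(makeP2())` is `['b', 'a']`.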

sjrd 1 day ago 0 replies      
For a more comprehensive reference on v8 internals, there is http://wingolog.org/tags/v8, which has been around for a long time. It is even directly referenced from https://developers.google.com/v8/under_the_hood.

I don't think there's anything in this post that wasn't already explained in this reference, except the fact that now there's Ignition and TurboFan, but that doesn't fundamentally change anything.

schindlabua 16 hours ago 0 replies      
Does this also mean that in the "function that returns an object" vs. "prototype" debate the former wins, because object literals presumably require fewer class transitions?

  let Ctor = function(a, b) { this.a = a; this.b = b; };

  let obj = (a, b) => ({ a: a, b: b });

cryptozeus 1 day ago 0 replies      
Thanks for the post, great writing...
Inside a fast CSS engine hacks.mozilla.org
428 points by rbanffy  10 hours ago   117 comments top 13
crescentfresh 5 hours ago 7 replies      
I always wonder: who puts together nifty little blog posts on this kind of thing, complete with graphics made just for the article? By that I mean, literally, what title do they have?

My colleagues and I would/could write up a technical breakdown of something neat or innovative we did to solve some problem at work, but we sure as shit can't make cool little graphics interspersed between opportune paragraphs, nor could we figure out how to make the thing entertaining to read.

Is this kind of thing done in coordination with like a PR/graphics department?

robin_reala 10 hours ago 1 reply      
I turned this on a couple of weeks ago on Nightly and have noticed precisely zero problems, and a really nice little speedup on CSS-heavy sites. Really good to see large chunks of parallelised Rust code start making their way over from Servo to Firefox.
fpgaminer 2 hours ago 0 replies      
Isn't it just crazy that we're gonna get all this cool tech in a browser that is completely free and open source?

And along the way, Mozilla created what is perhaps the most disruptive programming language of the past decade. For free. And open source.

It's really hard to appreciate the gravity of this.

lucideer 9 hours ago 5 replies      
It's great to see any company going into detail about their technical implementation, so I'm extremely hesitant to be critical, but I'm really curious who the target audience for this one particular article is.

It's a very very odd mix of language that sounds like it's directed at a very young child and standard technical speak. Not the usual for the Hacks blog.

Not to fault the article too much, but I just found the tone a bit confusing. Even veering towards condescension in some parts, though I'm certain that's entirely accidental and wasn't the author's intent at all.

sanxiyn 9 hours ago 3 replies      
You may want to actually read this code. You can start by searching "LayoutStyleRecalc" at https://github.com/servo/servo/blob/master/components/layout.... The following is a verbatim copy:

  // Perform CSS selector matching and flow construction.
  if traversal_driver.is_parallel() {
      let pool = self.parallel_traversal.as_ref().unwrap();
      // Parallel mode
      parallel::traverse_dom::<ServoLayoutElement, RecalcStyleAndConstructFlows>(
          &traversal, element, token, pool);
  } else {
      // Sequential mode
      sequential::traverse_dom::<ServoLayoutElement, RecalcStyleAndConstructFlows>(
          &traversal, element, token);
  }

pducks32 7 hours ago 0 replies      
This is really great of Mozilla. I'm really excited to see such a large Rust project used at such a scale; after this, I think there will be few doubts that it's a really, really impressive language. Also, the fact that Mozilla knew this and decided to take such a bold step as rewriting their engine is super cool. I've done rewrites and they never go well, so hats off to them.
aembleton 9 hours ago 1 reply      
The writeup is inspiring. I found it very clear and yet reasonably in depth. It helps me to understand how much work modern browsers are doing.

Also, excellent use of Rust.

jancsika 9 hours ago 7 replies      
> 4. Paint the different boxes.

Is this really what happens under the hood?

1. If I overlap 52 html <div>s like a deck of cards, does the browser really paint all 52 div rectangles before compositing them?

2. If I overlap 52 <g>s like a deck of cards, does the browser really paint all 52 <g>s before compositing them?

3. In Qt, if I overlap 52 QML rectangles like a deck of cards, does the renderer only paint the parts of the rectangles that will be visible in the viewport? I was under the impression that this was the case, but I may be misunderstanding how the Qt QML scenegraph (or whatever it is called) works in practice.

edit: typo

tannhaeuser 9 hours ago 0 replies      
Congrats! Beyond the CSS engine itself, I also very much appreciate inside development stories like these. I'd also like to read a meta-story about the development efforts in terms of time spent, prior knowledge required etc., and CSS spec feedback, with a reflection on the complexity of implementing CSS from scratch.
t20n 6 hours ago 0 replies      
Haven't even read it, just looked at the drawings, and now I know how a browser parses CSS.
om2 5 hours ago 2 replies      
I wish this post had included some benchmarks or measurements.
kristofferR 7 hours ago 1 reply      
It's such a shame Firefox (including the nightlies) kills my Mac (making most other applications hang/break), since the new versions are otherwise way better than Chrome.

Does anyone know what it is about Firefox that makes the rest of my system unable to spawn new processes?

c-smile 9 hours ago 5 replies      
Parallel processing demonstrates benefits only if you have physical cores to run the code on. If just one core is available to the app, then parallel processing is a loss due to thread-preemption overhead.

Are there any real-life examples of achieved speedup?

Chrome Enterprise blog.google
310 points by pgrote  8 hours ago   193 comments top 30
redm 6 hours ago 14 replies      
I'm hesitant to invest any more in the Google ecosystem after reading about how account termination can happen without explanation or recourse. [1] The last thing I need is more lock-in to a Google world.

[1] https://news.ycombinator.com/item?id=15065742

pducks32 7 hours ago 7 replies      
I thought this was a special version of Chrome the browser, and I think many people will too - especially someone like my brother, who works at a corporation. If they told him they're switching to Chrome Enterprise, he'd be a tad confused.

Side note: the reading experience on this blog is one of the best I've seen on mobile. Love the text size, though the header animation was not the smoothest. Nonetheless, great job.

twotwotwo 7 hours ago 2 replies      
One of my annoyances on consumer Chrome OS is that the built-in VPN support is tricky. There's a JSON format, ONC (https://chromium.googlesource.com/chromium/src/+/master/comp...), that maps to OpenVPN options. When I last used it, the documentation was a bit tricky (though it may have improved), I couldn't find ONC equivalents for some of my .ovpn options, and, most frustratingly, there was very little specific feedback if you tried to import a configuration that wasn't right. Because of all that, I wonder if it was developed so Google could support specific large customers' VPNs (think school districts or companies) and its public availability was mostly an afterthought.

If you leave the GUI, you can also run openvpn yourself on a good old .ovpn file, but you lose some of the nice security properties you get with the default Chrome OS setup, you have to do cros-specific hacks to make it work (https://github.com/dnschneid/crouton/issues/2215#issuecommen... plus switching back and forth between VPN and non-VPN DNS by hand), and last I checked it made ARC (Play Store) apps' networking stop working.

I would consider paying a premium just to get my Chromebook connecting to work's VPN smoothly, though of course I'd love it if improved VPN functionality were available to everyone by default.

At some point I'm probably also going to take a second look at the latest ONC docs. It looks like they've improved since I first looked at VPN setup a while back.

jstewartmobile 6 hours ago 4 replies      
Sounds great until they shut your shit down without explanation, and all you're left with is a support number that is about as helpful as a brick wall...
tbyehl 6 hours ago 2 replies      
Is this just a re-branding of "Chrome device management"?

I wish they'd come up with something family-oriented. I've got my mom, girlfriend, and girlfriend's children all using low-end Chromebooks / Chromebases as their primary computers, and I'm using one for about 80% of my computing. Chrome device management would be useful for us, but $50/year per device plus needing to buy G Suite per user is a bit much.

Havoc 5 hours ago 1 reply      
A big chunk of business is dead in the water without Excel (and to a lesser extent Word/Powerpoint).

And no don't tell me google sheets. Great for sharing data...ultra crap for data manipulation.

pat2man 7 hours ago 0 replies      
This is probably the perfect OS for any shared terminal: libraries, internet cafes, etc. You don't need native apps, just a locked down browser that can keep your settings and bookmarks across devices.
solatic 5 hours ago 3 replies      
Lots of enterprises out there with many users who need nothing more than a web browser, email, light word processing and maybe slideshow software. Active Directory integration makes the migration possible. Chrome OS provides it all in a way which dramatically reduces maintenance costs compared to Windows.

If Google starts showing some reduced TCO figures, they'll start to pull a lot of converts.

niftich 7 hours ago 0 replies      
Notwithstanding the Active Directory integration, this is the clearest shot across the bow of Microsoft's on-prem management suite yet.

The naming is puzzling. But I'm sure MS shops are used to weird names, and aren't likely to get pedantic about whether or not there should be an "OS" in there. They likely went with the simpler name to build on mindshare among decision-makers, and to intentionally muddy the waters to their benefit.

devrandomguy 4 hours ago 1 reply      
On a related note, does anyone know how to bury a dead corporate user account? The company that gave it to me doesn't even exist anymore, but Google keeps insisting that "account action is required". The company terminated my login shortly before imploding, and I lost the associated phone number when I fled the country, so there is no way that I can get back in to shut it down myself.

I suppose I will eventually just buy a new phone in a few years, but I'm not thrilled about all that private work / business data that is sitting in limbo.

Multicomp 6 hours ago 2 replies      
$50 per device per year? For what, extra management frameworks on a Chromebox? What a bargain /s
MBlume 6 hours ago 1 reply      
I would really like to have a computer for use at work where my IT department could feel like they had assurance that it was secure/virus free/malware free but from which I could sign into my personal accounts without feeling like I'm opening them to my IT department. Right now I just carry two laptops in my bag and it's really annoying. Wondering if Chrome Enterprise will enable this sort of thing.
trequartista 7 hours ago 3 replies      
While there is Google Play Integration, there is no word on how they plan to integrate the corporate intranet - which is littered with thousands of custom applications ranging from payroll to HR to ticket and incident management.
bbarn 6 hours ago 1 reply      
I suspect Active Directory integration might make this actually have legs. Especially in the educational industry.
bedhead 3 hours ago 0 replies      
Sounds great until you realize that their "Hate Algorithm" or whatever will end up erroneously shutting down your computer one day.
jaypaulynice 2 hours ago 0 replies      
$50/device?? With that said, I suspect Facebook is working on a browser...that could compete well with Chrome...any reason why Facebook hasn't developed a browser?
gangstead 6 hours ago 0 replies      
I don't believe the checkmark indicating "Cloud & Native Print" support on Chrome OS. I've got two Chromebooks and have used Chromeboxes at work and have never gotten printing to work reliably.
chaudhary27 4 hours ago 1 reply      
I don't like being locked into the Google ecosystem at work, but I also hate some of the Microsoft services at work.
massar 6 hours ago 1 reply      
I hope they finally acknowledge the Security Bypass they have in this "Enterprise" version... where it will be even more serious


It is fun to report those things to Google Project Zero and then find that people on that side obviously do not understand that security bypasses are... well... security issues.

full submission reproduced below, just in case they radar-disappear the item... duping items is apparently what Project Zero does so that the items disappear from Google results...


Thank you for an amazingly solid looking ChromeOS. Happy that I picked up a nice little Acer CB3-111, thought about plonking GalliumOS/QubesOS or heck OpenBSD on it, but with the TPM model and the disk wiping, not going to.

Just wanted to note this discovery so that you are aware of it and hopefully can address the problem as it would improve the status quo. Keep up the good work!

Greets, Jeroen Massar <jeroen@massar.ch>


By disabling Wireless on the login screen, or just not being connected, only a username and password are required to login to ChromeOS instead of the otherwise normally required 2FA token.

This design might be because some of the "Second Factors" (SMS/Voice) rely on network connectivity to work and/or token details not being cached locally?

But for FIDO U2F (eg Yubikeys aka "Security Key"[1]) and TOTP no connectivity is technically needed (outside of a reasonable time-sync). The ChromeOS host must have cached the authentication tokens/details though to know that they exist.
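To illustrate why TOTP needs no connectivity: per RFC 6238, the code is just an HMAC of a shared secret and the current 30-second time window, so any device that has cached the secret can verify it offline. A minimal sketch (my own, not Google's implementation):

```python
import hashlib
import hmac
import struct
import time


def totp(secret, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code entirely offline."""
    counter = int(for_time if for_time is not None else time.time()) // step
    msg = struct.pack(">Q", counter)  # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

This reproduces the RFC 6238 SHA-1 test vectors (e.g. the ASCII secret "12345678901234567890" at T=59 yields 94287082), using nothing but the secret and a clock.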

The article at [2] even mentions "No connection, no problem... It even works when your device has no phone or data connectivity."

[1] https://support.google.com/accounts/answer/6103523?hl=en
[2] https://www.google.com/intl/en/landing/2step/features.html


Chrome Version: 59.0.3071.35 dev
Operating System: ChromeOS 9460.23.0 (Official Build) dev-channel gnawty
Blink 537.36 V8


First, the normal edition:

- Take a ChromeOS-based Chromebook (tested with the version mentioned above)
- Have a "Security Key" (e.g. Yubikey NEO etc.) enabled on the Google Account as one of the 2FA methods
- Have Wireless enabled
- Login with username, then enter password, then answer the FIDO U2F ("Security Key") token challenge

All good as it should be.

Now the bad edition:

- Logout & shutdown the machine
- Turn it on
- Disconnect the wireless from the menu (or just make connectivity otherwise unavailable)
- Login with username, then password
- Do NOT get a question about Second Factors, just see a ~5 second "Please wait..." that disappears
- Voila, logged in.

That is BAD, as you just logged in without 2FA while that is configured on the account.

Now the extra fun part:

- Turn on wireless
- Login to Gmail/GooglePlus etc, and all your credentials are there, as that machine is trusted and cookies etc are cached.

And just in case (we are now 'online' / wireless is active):

- Logout (no shutdown/reboot)
- Login with username, password... and it indeed asks for 2FA now.

Thus showing that toggling wireless affects the requirement for 2FA.... and that is bad.


Expected behavior:

- Being asked for a Second Factor even though one is not "online".

As it stands, while you are walking through, say, an airport with no connectivity, and even with the token at home, just the username and password would be sufficient to log in.


For the Google Account (jeroen@massar.ch) I have configured:

- "strong" password

and as Second Factors:

- FIDO U2F: two separate Yubikeys configured
- TOTP ("Google Authenticator") configured
- SMS/Voice verification to cellphone
- Backup codes on a piece of paper in a secure place

Normally, when connected to The Internet(tm), one will need username (email), password, and one of the Second Factors. But disconnect, and none of the Second Factors are needed anymore.


The Google Account password changer considers "GoogleChrome" a "strong" password... they might want to check against a dictionary so that such simple things cannot be used, especially as 2FA can be bypassed this easily.

booleandilemma 5 hours ago 1 reply      
I assumed this was an enterprise version of Chrome, with the main difference being it doesn't auto update, thus being more friendly to the IT departments who administer a company's computers.
ben174 7 hours ago 2 replies      
I've been seeing IT become increasingly frustrated at their inability to lock down the security on MacOS to the level they'd hoped. Wouldn't be surprised to see silicon valley startups issue Chromebooks out as the default in 3-4 years time. Especially if Google gets this right.
open-source-ux 4 hours ago 3 replies      
Not a popular opinion here I know, but I'll say it anyway. Not a single word in that blog post about privacy.

Chrome OS is already widely used in US schools (and tracks student online activities), now we have a 'business-friendly' version of Chrome OS.

What kind of analytics does a cloud OS like this record? What does Google do with that data? Even if that data is 'anonymised' (a pretty meaningless term nowadays), in aggregated form that gives Google staggering quantities of data that they can mine for the future. Why did Google not even mention the word privacy once in that blog?

demarq 4 hours ago 1 reply      
That is a very compelling price point.
killjoywashere 7 hours ago 0 replies      
David was working on the smart card authentication system for ChromeOS not too long ago. Glad to see this maturing.
hiram112 6 hours ago 1 reply      
I've always had the belief that the Microsoft juggernaut would continue its slow decline in relevance as mobile and web devices removed the need for Windows, and the improvement of apps like Google Docs, OpenOffice, etc. would eat away at Office from the other side.

But I really think now we're approaching the point where their fall might happen swiftly. Chromebooks are fine for the majority of corporate users. And if they catch on, there is no need for any of the Active Directory / Azure tie-ins that MS has been hoping would pull enterprise customers towards Azure, Office 365, and all the rest.

And even if Microsoft can convince customers to stay, they simply won't be able to charge the same prices they've enjoyed for decades now with the overpriced Office, Server, and Client access licenses.

And once an enterprise moves away from Active Directory and Office, I don't see any benefit of using the very expensive Sharepoint, Outlook, OneDrive, and other apps that have always been overpriced, but worth it as they integrated well together and saved companies more money via lower IT costs.

darkr 5 hours ago 0 replies      
> According to Ed Higgs, Interim Director of Global Service Delivery for Group IT at Rentokil: "With over 500 Chromebooks in use in our organization, Chrome now forms part of our standard offering within Rentokil Initial."

500? Do you even lift bro?

frik 7 hours ago 1 reply      
Please change the title to "Chrome OS Enterprise" - it's not Chrome browser enterprise.
SingletonIface 7 hours ago 0 replies      
Oh look, it's Google Ultron!

Nope, guess it's not :(

sandGorgon 7 hours ago 1 reply      
This is obviously Android Enterprise.

the killer feature is obviously the Play Store - does anyone know if apps like Skype for Android work properly from the play store? Including video and audio ?

What about things like Duo and Allo ?

manbearpigg 2 hours ago 0 replies      
I have kept away from Google since they started promoting their alt-left political agenda. Just software for me, thanks. At least I found a new appreciation for the open source movement.
Why is this C++ code faster than my hand-written assembly (2016) stackoverflow.com
386 points by signa11  15 hours ago   168 comments top 17
abainbridge 13 hours ago 4 replies      
A couple of weeks ago I'd never heard of Peter Cordes. Now the linked article is the third time I've seen his work. He's doing a fine job of fixing Stack Overflow's low-level optimization knowledge. Not so long ago all I seemed to find there was people saying something like, "well, you shouldn't optimize that anyway", or, "modern computers are very complex, don't even try to understand what's happening".
kazinator 11 hours ago 3 replies      
TL;DR:

> If you think a 64-bit DIV instruction is a good way to divide by two, then no wonder the compiler's asm output beat your hand-written code.

Once (maybe 25 years ago?) I came across a book on assembly language programming for the Macintosh.

The authors wrote a circle-filling graphic routine which internally calculated the integer square root in assembly language, drawing the circle using the y = sqrt(r * r - x * x) formula!

What is more, the accompanying description of the function in the book featured sentences boasting about how it draws a big circle in a small amount of time (like "only" a quarter of a second, or some eternity of that order) because of the blazing speed of assembly language!

How could the authors not have used, say, MacPaint, and not be aware that circles and ellipses can be drawn instantaneously on the same hardware: fast enough for drag-and-drop interactive resizing?

payne92 11 hours ago 7 replies      
tl;dr -- the asm author used DIV to divide by a constant 2

More fundamentally: it's theoretically possible to at least match compiled code performance with assembly, because you could just write the code the compiler generates.

BUT, it requires a LOT of experience.

Modern compilers "know" a lot of optimizations (e.g. integer mult by fixed constant --> shifts, adds, and subtracts). Avoiding pipeline stalls requires a lot of tedious register bookkeeping, and modern processors have very complicated execution models.

It's almost always better to start with a compiler-generated critical section and see if there are possible hand optimizations.
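The strength reductions mentioned above can be shown in a couple of lines. This is an illustrative sketch of the identities, not actual compiler output: for nonnegative integers, dividing by 2 is a single right shift, and multiplying by a small constant decomposes into shifts and adds, both far cheaper than a 64-bit DIV.

```python
def div2(n):
    """n // 2 for nonnegative n, the way a compiler emits it: one shift."""
    return n >> 1


def mul10(n):
    """n * 10 as shifts and adds: 10n = 8n + 2n = (n << 3) + (n << 1)."""
    return (n << 3) + (n << 1)
```

The compiler proves these identities hold for the types involved and picks whichever instruction sequence is cheapest on the target microarchitecture, which is exactly the bookkeeping a hand-written-assembly author has to get right by themselves.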

bluedino 12 hours ago 1 reply      
>> Have you examined the assembly code that GCC generates for your C++ program?

A very polite way of saying, "why are you even using assembly, when you don't understand assembly?"

AdmiralAsshat 12 hours ago 2 replies      
The question was more interesting than the answer.

tl;dr version--the author's hand-written assembly was poor.

I guess the more interesting takeaway is "Just because it's assembly doesn't mean it's good assembly."

ericfrederich 12 hours ago 0 replies      
For fun I ported the C++ to Python and Cython without any kind of mathematical or programmatic optimizations. C++ was 0.5 seconds, then Python was 5.0 seconds. Cython, which was the same exact code as Python except sprinkled with "cdef long" to declare C types, was just 0.7 seconds.
SeanDav 12 hours ago 3 replies      
General comment and not aimed at this specific instance:

Just because you are writing in assembler, does not mean it is going to run faster than the same code in a compiled language. There has been decades of research and who knows how many man-years of effort that has gone into producing efficient compiled code from C, C++, Fortran etc.

Your assembly skills have to be of quite a decent order to beat a modern compiler.

BTW: The answer to the question on Stack Overflow by Peter Cordes is a must-read. Brilliant.

iamjk 11 hours ago 0 replies      
The people who write "article answers" like this on SO are the real MVP's of the web.
raphlinus 12 hours ago 8 replies      
Apologies if this is somewhat off-topic for the thread, but I suspect this will be a fun puzzle for fans of low-level optimization. The theme is "optimized fizzbuzz".

The classic fizzbuzz will use %3 and %5 operations to test divisibility. As we know from the same source as OP, these are horrifically slow. In addition, the usual approach to fizzbuzz has an annoying duplication, either of the strings or of the predicates.

So, the challenge is, write an optimized fizzbuzz with the following properties: the state for the divisibility testing is a function with a period of 15, which can be calculated in 2 C operations. There are 3 tests for printing, each of the form 'if (...) printf("...");' where each if test is one C operation.

Good luck and have fun!
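One possible shape for a solution, as a sketch under my own assumptions (not claimed to hit the stated operation counts exactly): carry a counter with period 15 and test single bits of precomputed masks instead of computing % 3 and % 5 every iteration.

```python
# Bit t of each mask is set when t % 3 == 0 (FIZZ) or t % 5 == 0 (BUZZ).
FIZZ = 0b001001001001001
BUZZ = 0b000010000100001


def fizzbuzz(n):
    """FizzBuzz for 1..n with no modulo in the loop body."""
    out, t = [], 0
    for i in range(1, n + 1):
        t = t + 1 if t < 14 else 0  # state update: t == i % 15, period 15
        word = ""
        if FIZZ >> t & 1:
            word += "Fizz"
        if BUZZ >> t & 1:
            word += "Buzz"
        out.append(word or str(i))
    return out
```

This works because i % 3 and i % 5 depend only on i % 15, so the whole divisibility state fits in one 15-entry cycle.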

bjoli 13 hours ago 1 reply      
I know it is not the point of the question, but that problem would benefit greatly from memoization. Calculate it recursively and memoize the result of every step. With all the neat trickery they are doing with assembly they could easily go sub-10ms.

I whipped together a short poc in chezscheme, and it clocks in at about 50ms on my 4 yo laptop.
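The underlying Stack Overflow problem appears to be the classic longest-hailstone-chain (Collatz) computation; that is an assumption on my part, as is everything below (a Python sketch of the memoization idea, not the commenter's Scheme code). Cache the chain length of every value ever visited, so later starting points mostly hit the cache.

```python
def collatz_lengths(limit):
    """Return (start, chain_length) of the longest hailstone chain for starts below `limit`."""
    cache = {1: 1}  # chain length (number of terms) for each value seen

    def length(n):
        path = []
        # Walk forward until we reach a value whose length is already cached.
        while n not in cache:
            path.append(n)
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        total = cache[n]
        # Unwind: each value on the path is one term longer than its successor.
        for value in reversed(path):
            total += 1
            cache[value] = total
        return total

    return max(((s, length(s)) for s in range(1, limit)), key=lambda pair: pair[1])
```

For starts below 10 this reports 9 with a 20-term chain; thanks to the cache, each value's chain is only ever walked once across all starting points.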

elcapitan 13 hours ago 2 replies      
tldr: compiler replaces /2 with a shift.
msimpson 10 hours ago 0 replies      
> If you think a 64-bit DIV instruction is a good way to divide by two, then no wonder the compiler's asm output beat your hand-written code...

Compilers employ multitudes of optimizations that will go overlooked in hand-written ASM unless you, as the author, are very knowledgeable. End of story.

coldcode 9 hours ago 2 replies      
When I started programming on an Apple II+, assembly was important. Today there are likely only a few people in the world who truly understand what any particular CPU family is actually doing sufficiently to beat the compiler in some cases, and they probably are the ones writing the optimizer. But the 6502 was fun to code for, and the tricks were mighty clever but you could understand them.
m3kw9 8 hours ago 0 replies      
Because the compiler has optimized it better than you.
takeda 5 hours ago 0 replies      
Not too surprising answer: "your assembly sucks"
smegel 4 hours ago 0 replies      
> but I don't see many ways to optimize my assembly solution further

I can't do it therefore it must be impossible!

barrkel 12 hours ago 0 replies      
This was a borderline help vampire question, but it ended up working out well, probably for nerd-sniping reasons.
Laverna A Markdown note-taking app focused on privacy laverna.cc
329 points by mcone  2 days ago   169 comments top 50
edanm 2 days ago 6 replies      
I'd really love a good Evernote alternative, but the one feature that tends not to exist is full page bookmarking / web clipping. I want to be able to clip a full page easily into the program, which will also save a copy of whatever article I happen to be reading. I really wouldn't mind (and would even love) to roll my own notes system with vim/etc. But without full page clipping, it would be a problem.

Another good thing about Evernote is the easy ability to mix in images, documents, and text.

The reasons I want to leave Evernote, btw, is:

1. I worry about their future and would rather a more open solution.

2. Their software, at least on Mac, really, really sucks. It's slow, and has tons of incredibly ridiculous bugs that have been open for a long time. E.g. when typing in a tag, if there's a dash, it will cause a problem with the autocompletion. For someone who uses the tags a lot and has a whole system based on them, having dashes cause a problem is a big deal, and the fact that it hasn't been fixed in ~a year makes me really question their priorities.

yborg 2 days ago 3 replies      
Apart from having sync capability (via Dropbox), this in almost no way, shape, or form replicates the current capabilities of Evernote. A more accurate title would be "Laverna: An open source note-taking application." This of course will not generate many clicks, since there are dozens of things like this, many of them better-looking and more mature.
zachlatta 2 days ago 16 replies      
I've given up on using any sort of branded app for notetaking. At best it's open source and the maintainers will lose interest in a few years.

When you write things down, you're investing in your future. It's silly to use software that isn't making that same investment.

After trying Evernote, wikis, org-mode, and essentially everything else I could find, I gave up and tried building my own system for notes. Plain timestamped markdown files linked together. Edited with vim and a few bash scripts, rendered with a custom deployment of Gollum. All in a git repo.

It's... wonderful. Surprisingly easy. Fast. If there's a feature I wish it had, I can write a quick bash script to implement it. If Gollum stops being maintained, I can use whatever the next best markdown renderer is. Markdown isn't going away anytime soon.

It's liberating to be in control. I find myself more eager to write things down. I'm surprised more people don't do the same.

Edit: here's what my system looks like https://imgur.com/a/nGplj

trampi 2 days ago 1 reply      
Just FYI, more than one year has passed since the last release. The commit frequency has declined significantly. I use it, but I am not sure I would recommend it in its current state. It does its job and I like it, but the future is uncertain.
mikerathbun 2 days ago 2 replies      
I am constantly looking for a good notes app. I have been a paying Evernote user for years and I really like it. The only problem is the formatting. I take a lot of pride in formatting my notes and like them to look a certain way depending on the content. Markdown is definitely the way I want to go, which Evernote has promised in the past but still hasn't delivered. That said, none of the buttons in Laverna seem to work on my Mac. Can't sign into Dropbox and can't create a notebook. Oh well.
omarish 2 days ago 0 replies      
The encryption seems very insecure. I just tried turning on encryption and it revealed my password in the URL bar. And now each time I click on a new page, it shows my password in the URL bar.


itaysk 2 days ago 6 replies      
There are so many note taking apps and yet I still can't find one I like.My requirements are simple:

- Markdown
- cross platform with sync
- tags

I have settled on SimpleNote for now, but I'm not completely happy. Its Mac app is low quality and doesn't support Markdown. It's open source, but they ignore most of the issues. Bear Notes looks cool but isn't cross platform.

I am still looking. If this thing had phone apps (I'm on iPhone) I'd give it a go.

mgiannopoulos 2 days ago 0 replies      
This came up on Product Hunt today as well:

> Turtl lets you take notes, bookmark websites, and store documents for sensitive projects. From sharing passwords with your coworkers to tracking research on an article you're writing, Turtl keeps it all safe from everyone but you and those you share with.

https://turtlapp.com/download/
bharani_m 2 days ago 1 reply      
I run a minimal alternative to Evernote called EmailThis [1].

You can add the bookmarklet or browser extension. It will let you save complete articles and webpages to your email inbox. If it cannot extract useful text, EmailThis will save the page as a PDF and send it as an attachment.

No need to install apps or login to other 3rd party services.

[1] https://www.emailthis.me

ernsheong 2 days ago 2 replies      
It doesn't do web clippings though.

Incidentally, I am building https://pagedash.com to clip web pages more accurately, exactly as you saw it (via a browser extension)! Hope this helps someone.

scribu 2 days ago 1 reply      
Would be interesting to do a comparison with Standard Notes, which seems to offer the same features.
devinmcgloin 2 days ago 0 replies      
I've been using Notion (https://www.notion.so) for a while and have nothing but good things to say.

- It's incredibly flexible. You can model Trello task boards in the same interface as writing or making reference notes.
- They've got a great desktop client and everything syncs offline.
- LaTeX support
- Programmable templates
- Plus there seem to be pretty neat people behind it

I switched to it 8 months ago or so and haven't really looked back.

trextrex 2 days ago 0 replies      
Last I checked Laverna, they had really serious issues with losing data after every update or so. I stopped using it after encountering one of these. Looks like a lot of these issues are still open:







Edit: Formatting

kepano 2 days ago 0 replies      
Recently went through the process of evaluating every note taking tool I could find. Settled on TiddlyWiki which is slightly unintuitive at first but very well thought out once you get it customized to your needs. Fulfills most of the needs I see people requesting on this thread, i.e. flat file storage, syncable via Dropbox, markdown support, wiki structure.
yeasayer 2 days ago 2 replies      
One of the biggest use cases of Evernote for me is OCR'd notes with search. All my important checks, slips and papers go there. It seems that Laverna doesn't have this feature, so it's not an alternative for me.
tandav 2 days ago 0 replies      
I use plain .md files in a GitHub "Notes" repo. I don't even render them, just use the Material Theme for Sublime Text.


macawfish 2 days ago 1 reply      
For notes, I use a text editor and Resilio Sync/Syncthing.

It's great!

jasikpark 2 days ago 0 replies      
A ridiculously simple, but good notes app I've found is https://standardnotes.org
twodave 2 days ago 0 replies      
I tend to use Workflowy.com for anything hierarchical/simple/listy and then Trello for anything bigger.

For instance, recently did some CTO interview screenings via phone. It was really easy to set up a Trello board with a card per candidate, drop them in the list matching their current position in the pipeline, attach a resume, recruiter notes, due dates etc. The interview itself I threw as a bulleted list into Workflowy and just crossed things off as they were covered. Took notes in notepad and uploaded to the Trello board at the end. Invited stake holders to view the board and sent out a daily email with progress. Interviewed 8 candidates this way in a total of about 10 hours, including all the time spent prepping and scoring and communicating with the hiring team.

barking 2 days ago 0 replies      
What are the main concerns people have about using Evernote: data protection, the company going out of business, the code being closed and proprietary? I can understand all those, but sometimes it also feels like everyone (me included) expects every software to be free now.

I have a free Evernote account and don't use it very much, but I find it handy for some things such as cooking recipes and walking maps. I think it would also be great for David Allen's GTD technique if I could ever be disciplined enough.

If Evernote removed the free tier I think I would pay up; the pricing for the personal plans is very reasonable. I'd probably make more use of it too. Humans don't tend to value free stuff. For someone like me, I think they'd have had a better chance of turning me into a paying customer if their model was an initial free period followed by having to pay up. But I will never pay up if I can get away with paying nothing.

ziotom78 2 days ago 0 replies      
I used to use org-mode to take down notes when I attended seminars or meetings (I'm an astrophysicist). However, a feature I missed was the ability to quickly take photos to insert into my notes, in order to capture slides or calculations/diagrams done on the blackboard.

Thus, last year I subscribed to Evernote (which provides both features), and I must say that I am extremely satisfied. Moreover, Evernote's integration with Firefox and Android allows me to quickly save web pages for later reading (this might be possible with org-mode, but not as handy as with Evernote, which requires just one tap.)

I think that Laverna is interesting for users like me: it provides a web app with a nice interface, it implements the first feature I need (easy photo taking), and if an Android app really is on the way, integration with Android services might allow saving web pages in Laverna with one tap, like Evernote.

bunkydoo 2 days ago 2 replies      
I'm still using paper over here, nothing seems to do it for me on the computer. Paper is great, and paper is king.
dade_ 2 days ago 2 replies      
I recently tried it again; Laverna is very buggy, and I just received an email from Dropbox noting that the API they used is being deprecated. The app isn't really native, just a Chromium window running a local web app.

So if it needs to be mobile, I use OneNote, but have to use the web app on Linux, and search is useless in the web app. For desktop only, I use Zim: cross platform, lots of plugins, stores everything in the file system as markdown. I haven't been able to get SVG to render in the notes though, which would be awesome; then I could just edit my diagrams and pictures with Inkscape. I can read the notes on mobile devices as they are just markdown, but a mobile app really is needed.

tardygrad 2 days ago 0 replies      
I'm going to give this a go.

Self hosted Dokuwiki has been my note taking tool of choice, usable on multiple devices, easy to backup, easy to export notes but markdown sounds good.

Is it possible to share notes or make notes public?

LiweiZ 2 days ago 0 replies      
Notes are data. We need ways to input and store them fully under the user's control. And we need a much better way to get insight from our own notes.
perilunar 2 days ago 0 replies      
I gave up on Evernote after experiencing syncing problems. Now I just use the default MacOS and iOS notes.app. Seems kind of boring but it actually works really well, and is nicely minimal. Also its free, pre-installed, no sync problems, and has web access via iCloud when I need it.

But for the love of god, why did they make the link colour orange instead of the default blue? And why can't it be changed via preferences? They had one job...

tomerbd 2 days ago 1 reply      
I found Google Keep to be the best for small notes without too much categorization, and Google Sheets to be the best for larger-scoped note taking due to the tabs.
anta40 2 days ago 0 replies      
I still use Evernote on my Android phone (Galaxy Note 4), mainly because of handwriting support.

For simplistic notes, well Google Keep is enough.

Still looking for alternatives :)

paulsutter 2 days ago 1 reply      
What I really really want is a tool that keeps notes in github, therefore an open/standard/robust way to do offline, merge changes, resolve conflicts.

I've lost so much data from Evernote's atrocious conflict resolution that it's my central concern. I don't see any mention of that here.

Use case: edit notes on a plane on laptop, edit notes on phone after landing, sometime later use laptop again and zap.

chairmanwow 2 days ago 0 replies      
Using the online editor on Android with Firefox is essentially unusable. It feels almost like Laverna is trying to do autocorrect at the same time as my keyboard. Characters appear and disappear as I type which makes for a really confusing UX.
djhworld 2 days ago 0 replies      
org-mode works well enough for me. It's a bit awkward at first and requires you to remember a lot of key combinations and things, but it does the job.

It doesn't work so well across devices (especially mobile), so I tend to carry around a small notebook, and then when I'm back at my computer I type anything useful that I'd captured in my notebook into org mode.

Sometimes I just take a picture of my notes in my notebook and then use the inlineimages feature to display the image inline, that works pretty well too although there's no OCR.

It seems to work OK.

pacomerh 2 days ago 0 replies      
I'm very happy with Bear notes. Will give this a shot though.
jusujusu 2 days ago 0 replies      
Title is making me post this: http://elephant.mine.nu

Cons: no mobile app, no OCR for docs, no web clipper

snez 2 days ago 1 reply      
Like what's wrong with the macOS Notes app?
mavci 2 days ago 0 replies      
I exported my content and found it in plain text. I think exported content should be encrypted too.
devalnor 2 days ago 0 replies      
I'm happy with Inkdrop https://www.inkdrop.info/
nishs 2 days ago 0 replies      
The macOS and web application don't look like the screenshot on the landing page. Is there a theme that needs to be configured separately?
pacomerh 2 days ago 0 replies      
Bear notes is free if you don't sync your devices and it supports markdown well. Very clean app.
pookeh 2 days ago 0 replies      
I have been using Trello. To save a screenshot, I Ctrl+Cmd+Shift+4 the screen, and paste directly into a card. It's fast.
Skunkleton 2 days ago 1 reply      
We have had this application for a long time. It is called a text editor or a word processor.
znpy 2 days ago 0 replies      
Very cool!

Just wanted to say that the notes app in Nextcloud is very handy too!

Actually, if Nextcloud could embed this Laverna somehow... that would be awesome.

5_minutes 2 days ago 0 replies      
I love Evernote for its ocr capabilities, so I can go paperless. But it seems this is not implemented here.
ehudla 2 days ago 0 replies      
The two must haves for me are integration with org mode (as was mentioned in thread) and with Zotero.
4010dell 2 days ago 0 replies      
I like it. Better than evernote. evernote was like trying to win a marathon running backwards.
Brajeshwar 2 days ago 1 reply      
"laverna.app can't be opened because it is from an unidentified developer."


nodomain 2 days ago 0 replies      
Last release 1 year ago... seems dead, right?
lewisl9029 2 days ago 0 replies      
It's really cool to see another app using remoteStorage for sync! I built Toc Messenger a few years ago on top of remoteStorage for sync as well, and it was a pleasure to work with (https://github.com/lewisl9029/toc, the actual app is no longer functioning since I took down the seed server quite a while ago). Unfortunately, it seems like the technology hasn't gained much traction since I last worked with it. The only 2 hosts listed on their wiki that offer hosted remoteStorage are the same that I saw two years ago: https://wiki.remotestorage.io/Servers

The other alternative sync method offered is Dropbox, and if it's also using the remoteStorage library as the interface as I'm assuming, it would have to depend on their Datastore API, which has been deprecated for more than a year now AFAIK (https://blogs.dropbox.com/developers/2015/04/deprecating-the...). Is that aspect of the app still functional? If anyone knows any other user-provided data storage APIs like Dropbox Datastore or remoteStorage that's more actively developed and supported, I'd love to hear about them.

The concept of apps built on user-provided and user-controlled data-sources, envisioned by projects like remoteStorage and Solid (https://solid.mit.edu/), has always been immensely appealing to me. If users truly controlled their data, and only granted apps access to the data they need to function (instead of depending on each individual app to host user data in their own locked-off silos), then switching to a different app would be a simple matter of granting another app access to the same pieces of data. Lock-in would no longer be a thing!

Imagine that! We could have a healthy and highly competitive app ecosystem where users choose apps by their own merit instead of by the size of their moat built on nothing but network effects. Newcomers could unseat incumbents by simply providing a better product that users want to switch to. Like a true free-market meritocracy!

Sadly, this is a distant dream because both newcomers and incumbents today realize the massive competitive advantage lock-in and network effects afford them. Incumbents will never give up their moat and allow the possibility of interop without a fight, and newcomers all end up racing to build up their own walled-off data silos because they have ambitions to become an incumbent enjoying a moat of their own one day. Even products that are built on top of open protocols and allow non-trivial interop tend to eventually go down the path of embrace, extend, extinguish, once they reach any significant scale.

I'm starting to think strong legislation around data-portability and ownership may be the only way a future like this could stand to exist, but the incumbents of today and their lobbying budgets will never let that happen.

rileytg 2 days ago 0 replies      
While the demo worked well, under the hood it looks like a somewhat aging codebase.
loomer 2 days ago 0 replies      
>Laverna for android is coming soon

I'd probably start using it right now if it was already available for Android.

krisives 2 days ago 0 replies      
Download no thanks
Analyzing Cryptocurrency Markets Using Python patricktriest.com
341 points by quotable_cow  1 day ago   52 comments top 14
Galanwe 1 day ago 1 reply      
My 2 cents:

- It is not really pertinent to compute the correlation between prices. That takes the currencies' trend into account, since prices, contrary to e.g. "returns", are non-stationary. This will lead to a biased, higher correlation. Just do a ".pct_change()" before the correlation.

- Also, averaging the price between exchanges is a bit naive. It hides arbitrage opportunities and does not reflect the underlying traded volume. A VWAP would at least provide a better approximation.
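To make the first point concrete, here is a minimal sketch (made-up prices, not data from the article) of correlating returns rather than raw prices with pandas:

```python
import pandas as pd

# Hypothetical daily closing prices for two coins (invented numbers).
prices = pd.DataFrame({
    "BTC": [100.0, 102.0, 105.0, 103.0, 108.0],
    "ETH": [10.0, 10.1, 10.4, 10.2, 10.9],
})

# Correlating raw prices picks up the shared trend of two
# non-stationary series, which inflates the coefficient.
naive_corr = prices.corr()

# Correlating period-over-period returns removes the trend first.
returns_corr = prices.pct_change().corr()
```

`pct_change()` leaves a NaN in the first row, which `corr()` drops pairwise, so no extra cleanup is needed for this comparison.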

plaidfuji 1 day ago 0 replies      
This would be more appropriately titled "downloading, cleaning, and plotting cryptocurrency price data using pandas and plotly in a Jupyter notebook."

TL;DR: the analysis is just a couple of time series correlation coefficient heatmaps.

That being said this is a great tutorial for people just starting out with data handling and analysis in pandas.

ghgr 1 day ago 2 replies      
It's great how readily available financial data is with cryptocurrencies! As a complement to this post, I've been recently working with Jupyter notebooks to analyze high-frequency trading activity in Bitcoin markets [1]. I've been listening to the GDAX socket since the end of July, so I have almost a month's worth of tick data (~7 GB+ of gzip-compressed JSON; it surely explodes to ~100 GB after extraction). If somebody is interested in carrying out further analysis, I can give you a link to download it.

[1] https://nbviewer.jupyter.org/github/ghgr/HFT_Bitcoin/blob/ma...

clarkmoody 1 day ago 0 replies      
(Shameless self-promotion)

You can watch a couple dozen Bitcoin markets trade in real-time, all on one chart, with my site: https://bitcoin.clarkmoody.com/tickers/

It gets very interesting when the price really starts to move, since all the markets tend to move in lockstep. The response time reveals how active the trading bots are, making sure to reduce arbitrage opportunities.

bdfhjk 1 day ago 1 reply      
I'm surprised they didn't mention machine learning to further analyze the cryptocurrencies. Especially since, a few years ago, an experiment showed that if we traded algorithmically on the S&P 500 via machine learning, traders could earn 8.5% returns compared to 5.6% from a random tactic. Here is a nice explanation: https://sigmoidal.io/machine-learning-for-trading/
grenkatost 1 day ago 4 replies      
I wanted to do something like that; eventually I did it on Elasticsearch. Then I wanted to share it with the community, so I added Grafana.

The final result is real-time analytics of trading on main exchanges and for major pairs: https://cointradeanalysis.com

indescions_2017 1 day ago 0 replies      
Don't think I've ever really seen the full $BTCUSD chart since inception like that. More than a slight resemblance to the Nasdaq-100 chart circa late 1999 / early 2000. Just saying, exercise caution out there ;)

Next step: prediction! As an active research subject, crypto-currencies may be the ideal candidate for using deep learning to forecast non-stationary time series data.

Theory and Algorithms for Forecasting Non-Stationary Time Series


ativzzz 1 day ago 0 replies      
What's a good resource for learning the statistical background behind an analysis like this? It goes beyond the basics of calculating p-values, but builds on them by analyzing a large space of correlation coefficients.

Would this be considered data science? It doesn't dive into machine learning or anything advanced, computer-science-wise.

Finnucane 1 day ago 1 reply      
Is it possible to get data on economic activity in cryptocoins? Which is to say, not just the coin trading, but goods and services being paid for with coin tokens? Every now and then you see a post to the effect of 'so-and-so is taking payment in Bitcoin.' Does anyone actually do that? Or are the coins just hoarded for their investment value?
bkolobara 1 day ago 2 replies      
I'm a big fan of Stellar. It looks like the only cryptocurrency with an actual use case that works today and minimises costs for the companies [1][2] building on top of it. Everything else looks mostly hype-driven. I would say that this is one of the reasons it trades so differently from everything else, except Ripple.

As many developers hang out here I want to point out that the Stellar Foundation has a rolling competition for developers: https://www.stellar.org/lumens/build/. You can see here all the projects submitted to the last round: https://galactictalk.org/t/sbc2017april

[1]: https://tempo.eu.com/en

[2]: http://chippercash.com/

johnt113 1 day ago 4 replies      
What's the point of correlation? If I put on 2 buys, either I will have double profit or double loss. If I put on 1 buy and 1 sell, I get zero.
mrchicity 1 day ago 2 replies      
Layering, Spoofing and Momentum Ignition are not "HFT Strategies." They're illegal market manipulation techniques, most often used by manual traders. You don't need to be ultra fast to bully prices around or enter large non-bona-fide orders. I've never heard of a legitimate proprietary trading firm (i.e. one that pays a salary and hires highly qualified people, not a boiler-room operation) intentionally using these techniques. Most are in the arbitrage, market making or statarb business.

It would be much more interesting to see the author dive into which events lead to fast price changes. How quickly are price shocks on other markets reflected on GDAX? How are changes in outright contracts like BTC/USD and ETH/USD reflected in ETH/BTC? Do USD-denominated pairs drive price discovery more or less than EUR-denominated ones, and does this change when US-based traders are asleep? Lots of stuff to look at here.

fiatjaf 1 day ago 2 replies      
Nice and everything, but you can't make money with this, or can you?
krath94 1 day ago 0 replies      
Anyone having trouble with the link?
A Tutorial on Portable Makefiles nullprogram.com
321 points by signa11  2 days ago   103 comments top 16
erlehmann_ 2 days ago 5 replies      
An issue I have with make is that it can not handle non-existence dependencies. DJB noted this in 2003 [1]. To quote myself on this [2]:

> Especially when using C or C++, often target files depend on nonexistent files as well, meaning that a target file should be rebuilt when a previously nonexistent file is created: If the preprocessor includes /usr/include/stdio.h because it could not find /usr/local/include/stdio.h, the creation of the latter file should trigger a rebuild.

I did some research on the topic using the repository of the game Liberation Circuit [3] and my own redo implementation [4]; it turns out that a typical project in C or C++ has lots of non-existence dependencies. How do make users handle non-existence dependencies except for always calling make clean?

[1] http://cr.yp.to/redo/honest-nonfile.html

[2] http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...

[3] https://github.com/linleyh/liberation-circuit

[4] http://news.dieweltistgarnichtso.net/bin/redo-sh.html (redo-dot gives a graph of dependencies and non-existence dependencies)

carussell 2 days ago 3 replies      
> Microsoft has an implementation of make called Nmake, which comes with Visual Studio. It's nearly a POSIX-compatible make, but necessarily breaks [...] Windows also lacks a Bourne shell and the standard unix tools, so all of the commands will necessarily be different.

What I've been mulling over is an implementation of make that accepts only a restricted subset of the make syntax, eliding the extensions found in either BSD or GNU make, and disallowing use of non-standard extensions to the commands themselves (and maybe even further restricted still). In theory, a make that does this wouldn't even need to depend on a POSIX environment: it could treat the recipes not as commands but as a language. It wouldn't even take much to bootstrap this; you could use something like BusyBox as your interpreter. Call it `bake`.

Crucially, this is not another alternative to make: every bake script is a valid Makefile, which means it is make (albeit a restricted subset).

0x09 2 days ago 5 replies      
The problem comes in as soon as you need conditionals, which is likely when attempting to build something portably. There may be some gymnastics you can do to write around their absence in standard make, but otherwise your options are:

- Supply multiple makefiles targeting different implementations

- Bring in autotools in all its glory (at this point you are depending on an external GNU package anyway)

- Or explicitly target GNU Make, which is the default make on Linux and macOS, is very commonly used on *BSD, and is almost certainly portable to every platform your software is going to be tested and run on. The downside being that BSD users need a heads up before typing "make" to build your software. But speaking as a former FreeBSD user, this is pretty easy to figure out after your first time seeing the flood of syntax errors.
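A related trick for keeping the Makefile itself conditional-free is to push platform choices into a small generated fragment. This is only a sketch (the `config.mk` name and its contents are hypothetical), and note that `include` is itself an extension, though one both GNU and BSD make support:

```make
# config.mk is written by a configure step, e.g. containing
# "CC = clang" on one platform and "CC = gcc" on another.
include config.mk

# Recipe lines must start with a tab.
program: main.o
	$(CC) $(CFLAGS) $(LDFLAGS) -o program main.o
```

The Makefile then stays the same everywhere; only the generated fragment varies per platform.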

brian-armstrong 2 days ago 2 replies      
Honestly, just use CMake. It is far easier to make it work cross-platform and, better yet, cross-compile. There's no good reason to write a Makefile by hand, and no large projects do it anyway.
kccqzy 2 days ago 3 replies      
No one wants to manually do dependency management in even a moderately sized project. I really haven't found an ideal way to have these -MM -MT flags integrated into Makefiles; I've tried having an awk script automatically modify the Makefile as the build is happening, but of course the updated dependencies will only work for later builds, so it's only good for updating the dependencies. Any other approaches HNers used and really liked?
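One common answer (decidedly GNU make plus gcc/clang territory, not POSIX, and only a sketch) is to let the compiler emit .d fragments as a side effect of compilation, then include them on later runs, so no awk rewriting of the Makefile is needed:

```make
# Relies on GNU make's -include and on gcc/clang's -MMD -MP flags.
CFLAGS += -MMD -MP
OBJS = main.o util.o

# Recipe lines must start with a tab.
program: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# First build: no .d files exist yet, and -include ignores that.
# Later builds: each .d lists the headers its .o depends on.
-include $(OBJS:.o=.d)
```

Like the awk approach, the dependencies only take effect on the build after they are generated, but by then each .o is already fresh, so nothing is missed.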
ainar-g 2 days ago 2 replies      
Doesn't CMake take care of most of this? Is there any reason not to use CMake on mid-to-large-scale projects?

I am genuinely curious. I've only recently started looking at CMake, and it seems like it should generate portable Makefiles, or at least have an option to generate them.

thetic 1 day ago 0 replies      
> The bad news is that inference rules are not compatible with out-of-source builds. You'll need to repeat the same commands for each rule as if inference rules didn't exist. This is tedious for large projects, so you may want to have some sort of configure script, even if hand-written, to generate all this for you. This is essentially what CMake is all about. That, plus dependency management.

This isn't a case for CMake. It's a case against POSIX Make. The proposed "portability" and "robustness" of adherence to the POSIX standard are not worth hamstringing the tool. GNU Make is ubiquitous and is leaps and bounds ahead of pure Make.
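For context, the POSIX-portable counterpart of GNU's `%.o: %.c` pattern rule is a suffix rule; a minimal sketch (recipe line must start with a tab):

```make
# Suffix (inference) rules are the standard way to say
# "build any .o from the matching .c":
.SUFFIXES: .c .o
.c.o:
	$(CC) $(CFLAGS) -c -o $@ $<
```

Strictly speaking, the historical default rule omits `-o $@`, and `-c -o` together was once a portability worry, though every compiler in common use today accepts it.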

kayamon 1 day ago 0 replies      
I love that their definition of "portable" is software that runs exclusively on UNIX.
c3d 1 day ago 0 replies      
This article barely addresses what really causes trouble in practice, namely non-portable tools. sed, for example, has different switches on macOS and Linux. MinGW is another world.

Also check out https://github.com/c3d/build for a way to deal with several of the issues the author addresses (but not posix portability)

sigjuice 1 day ago 0 replies      
I like Tom Duff's http://www.iq0.com/duffgram/com.html for experimental programs contained in a single file.
majewsky 2 days ago 1 reply      
Wait... "%.o: %.c" is nonstandard?!?
cmm 2 days ago 2 replies      
where, except for Windows, is requiring GNU Make a problem?
JepZ 2 days ago 0 replies      
It's been a while since I wrote a makefile, but as far as I remember it was very easy to create a full-featured CMake file if the project used the layout which CMake assumed (easy for new projects).

However, porting existing projects from traditional make files to cmake could be next to impossible.

git-pull 2 days ago 1 reply      
More nifty portable Make facts:

- For portable recursive make(1) calls, use $(MAKE). This has the added advantage that BSD systems, which can electively install GNU Make as gmake, can pass in the path to gmake to run GNU Makefiles [1]

- BSDs don't include GNU Make in the base system. The BSD ports and build system uses make extensively, and has a different dialect [2]

- In addition to that, you will likely choose to invoke system commands in your Makefile. These can also have GNU-specific features that won't work on BSDs. So keep your commands like find, ls, etc. POSIX-compliant [3]

- Part of the reasons tools like CMake exist is to abstract not only library/header paths and compiler extensions, but also the fact POSIX shell scripting and Makefile's are quite limited.

- Not only is there a necessity to use POSIX commands and POSIX compatible Make language, but the shell scripting must also not use Bash-isms and such, since there's no guarantee the system will have Bash.

- POSIX Makefiles have no conditionals as of 2017. Here's a ticket from the issue tracker suggesting it in 2013: http://austingroupbugs.net/view.php?id=805.

- You can do nifty tricks with portable Makefile's to get around limitations. For instance, major dialects can still use commands to grab piped information and put it into a variable. For instance, you may not have double globs across all systems, but you can use POSIX find(1) to store them in a variable:

 FILES= `find . -type f -not -path '*/\.*' | grep -i '.*[.]go$$' 2> /dev/null`
Then access the variable:

 if command -v entr > /dev/null; then ${WATCH_FILES} | entr -c $(MAKE) test; else $(MAKE) test entr_warn; fi
I cover this in detail in my book The Tao of tmux, available for free to read online. [4]

- macOS comes with Bash, and if I remember correctly, GNU Make comes with the developer CLI tools as make.

- For file watching across platforms (including with respect for kqueue), I use entr(1) [5]. This can plop right into a Makefile. I use it to automatically rerun testsuites and rebuild docs/projects. For instance https://github.com/cihai/cihai/blob/cebc197/Makefile#L16 (feel free to copy/paste, it's permissively licensed).

[1] https://www.gnu.org/software/make/manual/html_node/MAKE-Vari...

[2] https://www.freebsd.org/cgi/man.cgi?query=make&apropos=0&sek...

[3] http://pubs.opengroup.org/onlinepubs/9699919799/utilities/fi...

[4] https://leanpub.com/the-tao-of-tmux/read#tips-and-tricks

[5] http://entrproject.org

adamstockdill 1 day ago 0 replies      
target [target...]: [prerequisite...]
susam 1 day ago 0 replies      
We invoke shell commands in a Makefile, and if we are concerned about POSIX conformance in the Makefile syntax, we need to be equally concerned about POSIX conformance in the shell commands and shell scripts we invoke from the Makefile.

While I have not found a foolproof way to test for and prove POSIX conformance in shell scripts, I usually go through the POSIX.1-2001 documents to make sure I am limiting my code to features specified in POSIX. I test the scripts with bash, ksh, and zsh on Debian and Mac. Then I also test the scripts with dash, posh and yash on Debian. See https://github.com/susam/vimer/blob/master/Makefile for an example.
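A minimal sketch of that multi-shell smoke test (the script under test here is a throwaway created just for the example; the real thing would point at your actual script):

```shell
# Create a trivial script to stand in for the one under test.
script=/tmp/posix_check_demo.sh
printf 'echo hello\n' > "$script"

# Run it under every candidate shell that happens to be installed,
# skipping any that are missing.
for sh in sh dash ksh zsh bash posh yash; do
    command -v "$sh" >/dev/null 2>&1 || continue
    "$sh" "$script" >/dev/null 2>&1 && echo "$sh: ok"
done
```

This doesn't prove POSIX conformance, of course; it only catches constructs that one of the installed shells rejects outright.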

Here are some resources:

* POSIX.1-2001 (2004 edition home): http://pubs.opengroup.org/onlinepubs/009695399/

* POSIX.1-2001 (Special Built-In Utilities): http://pubs.opengroup.org/onlinepubs/009695399/idx/sbi.html

* POSIX.1-2001 (Utilities): http://pubs.opengroup.org/onlinepubs/009695399/idx/utilities...

* POSIX.1-2008 (2016 edition home): http://pubs.opengroup.org/onlinepubs/9699919799/

* POSIX.1-2008 (Special Built-In Utilities): http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3...

* POSIX.1-2008 (Utilities): http://pubs.opengroup.org/onlinepubs/9699919799/idx/utilitie...

The editions mentioned in parentheses are the editions available at the mentioned URLs at the time of posting this comment.

Here is a list of the commands specified in POSIX:

Special Built-In Utilities: break, colon, continue, dot, eval, exec, exit, export, readonly, return, set, shift, times, trap, unset

Utilities: admin, alias, ar, asa, at, awk, basename, batch, bc, bg, c99, cal, cat, cd, cflow, chgrp, chmod, chown, cksum, cmp, comm, command, compress, cp, crontab, csplit, ctags, cut, cxref, date, dd, delta, df, diff, dirname, du, echo, ed, env, ex, expand, expr, false, fc, fg, file, find, fold, fort77, fuser, gencat, get, getconf, getopts, grep, hash, head, iconv, id, ipcrm, ipcs, jobs, join, kill, lex, link, ln, locale, localedef, logger, logname, lp, ls, m4, mailx, make, man, mesg, mkdir, mkfifo, more, mv, newgrp, nice, nl, nm, nohup, od, paste, patch, pathchk, pax, pr, printf, prs, ps, pwd, qalter, qdel, qhold, qmove, qmsg, qrerun, qrls, qselect, qsig, qstat, qsub, read, renice, rm, rmdel, rmdir, sact, sccs, sed, sh, sleep, sort, split, strings, strip, stty, tabs, tail, talk, tee, test, time, touch, tput, tr, true, tsort, tty, type, ulimit, umask, unalias, uname, uncompress, unexpand, unget, uniq, unlink, uucp, uudecode, uuencode, uustat, uux, val, vi, wait, wc, what, who, write, xargs, yacc, zcat

Love it or hate it, truckers say they cant stop listening to public radio current.org
273 points by pesenti  1 day ago   262 comments top 24
throwaway2016a 1 day ago 15 replies      
I love NPR. My wife makes fun of me because sometimes I sit in the car with the battery / radio on after I park just to hear a story finish.

New Hampshire Public radio has a lot of local news and features too, which I can't hear anywhere else. I'm sure other markets are the same.

You can get it on Alexa too because most of the shows are also podcasts which is also great.

I've also heard a lot from my conservative friends that NPR is too liberal but to be honest, I haven't seen that at all. The more entertainment like programs like "Wait Wait" do sometimes have liberal jokes but I think anyone who can take a joke would laugh at them even if they are conservative.[1]

[1] I'm a Libertarian and thus social liberal and economically conservative which puts me in the great position of being able to laugh at jokes at the expense of either of the two major parties.

ourmandave 1 day ago 2 replies      
"And I said, 'Well, so why don't you stop listening?'" Murphy continued. "And he says, 'I can't, because it's the only station that will go on mile after mile and I can pick it up again.'"

Just drove from IA to MI and it's true. Between 90.3 and 92.7 FM there's always an NPR station waiting.

Or I could just download their app and listen anywhere / anytime.


patrickg_zill 14 hours ago 2 replies      
This is a PR piece. It has no correlative relationship to reality.

Source: knowing truckers, having them in my family, extended family, friends of family etc. for 40 years. They all have XM/Sirius at this point. None listen to NPR.

Why were no statistics about truckers and NPR quoted? Because they wouldn't have supported the thrust of the article.

lsdafjklsd 22 hours ago 5 replies      
my dad is a fox news republican, a few years back we were driving for 5 hours and were listening to npr. he was basically waiting for the overt liberal bias, but instead we listened to a bunch of fascinating stories and interesting shows about a variety of topics. I read somewhere that npr isn't liberally biased, but their fan base is mostly liberal, because liberals tend to prefer news that has no bias. It's also not loud and obnoxious, which they like.
codingdave 1 day ago 1 reply      
> it's the only station that will go on mile after mile and I can pick it up again.

I can vouch for that -- I just went on a road trip to visit all the state parks in Utah, and believe me, once you get out of Salt Lake, your choices are NPR, country music, or static. I listened to a lot of NPR.

sparcpile 1 day ago 0 replies      
Both XM and Sirius before they merged had a big following among truckers. Even after the merger, they still have shows and channels tailored to them. The big reason was that they could listen to the same channels across the country.
ilamont 1 day ago 6 replies      
Surprised that more truckers haven't discovered podcasts.
fourmii 23 hours ago 0 replies      
A couple of years ago, I took my family on a road trip starting from Phoenix and ended up in SF. We drove north from Phoenix and hit a few of the amazing parks including Yellowstone and then west to Portland and then south to SF. And public radio was part of the fun for us. Aside from our beloved NPR, I loved how the only stations we could get gave us an insight on who probably lived in the areas we were driving through. I never really liked country music, but on that trip, I grew to actually appreciate it. Not to mention the number of new songs and artists we were able to discover. Some of those songs we now associate with the various places along the way.
0xbear 19 hours ago 2 replies      
Used to listen to NPR all the time, but now they just shit on Trump 24x7. It was fun in the beginning, but got old after a few weeks. I switched to audible, where I remain to this day, and will remain for the foreseeable future.
cool-RR 22 hours ago 0 replies      
Today I learned: If you start a sentence with "Love it or hate it," it instills a temporary feeling in the reader that he cares about the thing you were saying.
mrmondo 13 hours ago 1 reply      
Not strictly related to public radio specifically, but still interesting on the topic of broadcast radio (and rather disappointing to say the least): here in Australia we have a big problem with the (lack of) advancement of broadcast quality. While we have DAB+ broadcasts, the stream audio quality is so bad due to the low bit rate that you're actually better off switching back to FM.

The average Australian DAB+ radio stream is... 24-48 Kbps! Yes, it's generally AAC+, which is about 30% more efficient than MP3, but it's not even close to FM sound quality.

eighthnate 13 hours ago 0 replies      
It's always entertaining when stories of NPR or Foxnews or anything political comes up.

You always see comments like "my conservative father/mother/brother/friend/etc listened to NPR to find liberal bias and found none".

Or "my liberal father/mother/brother/friend/etc listened to foxnews and couldn't find conservative bias".

If your conservative family member or friend couldn't find anything liberal on NPR, or your liberal friend couldn't find anything conservative on Fox News, then they must be hard of hearing and should have their ears checked by a doctor.

People are overcompensating too much or they are embarrassed by what they listen to. It's okay to admit that NPR is liberal. It's okay to admit that foxnews is conservative. It's why they exist. If NPR isn't liberal then it isn't doing its job and not serving their fanbase. If foxnews isn't conservative, then it isn't doing its job and not serving their fanbase.

It is just so obvious what people are trying to do and it is annoying.

Edit: Just a PSA... Current was founded by the people who founded NPR and PBS. So I'd take what they have to say, especially about NPR or PBS, with a grain of salt.

"whose members were leaders in founding the PBS and National Public Radio."


perpetualcrayon 12 hours ago 0 replies      

 "it's the only station that will go on mile after mile and I can pick it up again"
Captive audience. In my experience, when I've lived in very rural areas (and a lot of large metro areas), the only TV stations I could ever get over antenna were PBS, but more often than that I could only get FOX.

rdl 14 hours ago 0 replies      
I think truckers are one of the groups of people that really love Sirius/XM satellite radio, too -- uniform programming and coverage across the US, and if you are in your truck all the time, the monthly cost is trivial. Sirius even has special stations dedicated to truckers, as well as most of the ads added by the network being trucker-focused.

(If it were me, I'd go for audiobooks, though.)

Dowwie 15 hours ago 0 replies      
My family has listened to NPR and WNYC for years. I don't recommend anyone rely on it as an exclusive source for information as it does serve special political interests. Two examples that come to mind are the presidential primaries coverage (it was pro-HRC) and more recently coverage on the Saudi Arabia vs Qatar debacle (conveniently omitting Saudi involvement in global terrorism).
prevailrob 17 hours ago 3 replies      
Being from the UK, I take it NPR is the equivalent of Radio 4? (Albeit on a bigger scale, naturally)
nwatson 1 day ago 2 replies      
I listen to a number of NPR podcasts and perhaps I'm misremembering but it seems there's an uptick in the past few years of interviewing and discussing the lives of just "regular folks", people who work blue-collar jobs, people with conservative religious backgrounds, etc., ... and not in a disparaging way.

But ... Fresh Air with Terry Gross still largely wanders the fields of the cultural left with nary a nod to alternative viewpoints.

ComodoHacker 19 hours ago 0 replies      
Compare that to state radio and television in the late USSR. It wasn't strictly obligatory. But there wasn't anything else, so you ended up watching and listening to it anyway.

Imagine NPR as a giant propaganda machine, powerful and tuned up, but currently working in idle mode (or not?)

crispyambulance 14 hours ago 0 replies      
Not surprising.

I suspect the reason is that there is a lack of appealing alternatives to NPR on the "conservative" side.

If anyone has ever listened to conservative radio like Rush Limbaugh, you'll know that his slow-witted attempts at humor/satire are cringe-worthy failures unless the listener happens to be a septuagenarian with dementia. The other conservative options are preachers and conspiracy wack-jobs... is there anything else?

SoulMan 23 hours ago 1 reply      
Wonder if AM is covered everywhere throughout the country. Here in India, radio in trucks was there for a very short period of time till side-loaded cassettes, CDs and side-loaded MP3s took over. I think we never had great "talk" content. Even in 2017 I tune to BBC1, World Service and VOA in my car via the internet, but I get mocked a lot. Guess no one else does it. People here are used to side-loaded MP3 music and the not-so-intellectual Bollywood FM radio.
baursak 23 hours ago 2 replies      
As much as I like NPR, my politics have slowly drifted left over the years, and it's amazing how differently the same shows and hosts now sound to me through a more critical filter. To see what I mean, I recommend browsing through https://twitter.com/npr_watch (not my account, and I'm not affiliated in any way).
hprotagonist 1 day ago 2 replies      
Looking for a "bubble-free" media venue? Not a bad place to start.
bluetwo 23 hours ago 0 replies      
Wife used to make fun of me. Now she is addicted.
forapurpose 1 day ago 3 replies      
> Aside from the content, according to Murphy, drivers like NPR for the continuity. They can keep listening to the same programs from state to state.

Why don't truckers use satellite radio? They could listen to the same programs anywhere in the (U.S.? World?).

Hackers nab $500k as Enigma is compromised weeks before its ICO techcrunch.com
301 points by etherti  14 hours ago   224 comments top 33
che_shirecat 12 hours ago 8 replies      
How to make money in ethereum, from high to low risk:

1. Dump the leftovers of your bi-weekly software engineering paycheck into buying ETH, BTC, or whichever altcoin is popular this week. It went up 5000% in the past; it's got to keep growing, right?

2. Participate in an ICO and stock up on whatever platform token they're hawking. It's more profitable if you get in early due to some presale mechanism (hopefully here you aren't sending your hard-earned digital currency to a hacker's wallet). Sell these tokens about 4-5 days after the sale closes, before the hype dies down and the bagholders realize they're holding sand.

3. Even more profitable is kicking off your own ICO. Go through the checklist - fancy HTML5 theme that you can buy off of Themeforest and edit the HTML a bit for the landing page, create a Slack channel/Twitter account/subreddit, write a "whitepaper" that is easy enough for the shmucks you're targeting to understand, yet replete with enough pseudo-academic crypto jargon and irrelevant/unnecessary mathematical symbols to get the shmucks nodding their heads and pretending to understand how this particular algorithm/equation based on the "turing-complete ethereum blockchain" will "change the world" or "bank the unbanked" or, more importantly to them, appreciate 500x in value. Don't forget listing the members of your team and advisors, ideally with as much credential signalling as you can - "MIT," "Stanford," "Comp Sci Phd," "McKinsey," all work here, fake it till you make it and make sure you list Vitalik Buterin on your list of advisors just for that extra bit of technical legitimacy. Use centuries-old sales tactics to pitch your ICO - butter up your target audience's sense of superiority by emphasizing exclusivity - they're the only clever ones, they're the genius computer nerds who understand the 1000x potential of your algorithm, they're the ones that are breaking free of the shackles of regulated securities. Create a sense of urgency with a ticking timer on your landing page, a 24-hour window to buy your monopoly money, a subtle/not-so-subtle hint that the earlier you get in, the more you'll make.

4. You could always just put on your black hat and rob these extremely soft targets blind. The simpler the method, it seems, the better. Plus, there's absolutely no risk of ever being held accountable - that's the beauty of anonymous cryptocurrency!

crypt1d 14 hours ago 3 replies      
This might as well be a scam created by the CEO himself. I mean, who in this 'crypto world' would be stupid enough to use the same, previously compromised, password on all his accounts?

P.S. there was a story on reddit (can't find it now unfortunately) about how the attackers tried to deposit the money to Bittrex but luckily someone alerted them and the exchange froze the account. So there is still some hope that funds will be returned.

richardknop 13 hours ago 7 replies      
Another ICO, another scam. I am not sure I feel bad for people who are gullible enough to send their hard-earned money to these "companies". They have one PDF whitepaper and a generic Wordpress-template website with some buzzwords, based in the Cayman Islands or some other tax haven for money laundering. And they expect to get rich from that.
kirualex 13 hours ago 1 reply      
How to make quick money in 2017:

- Create a startup in the blockchain world
- Make an ICO to raise money
- Get "hacked"
- ...
- Profit!
djhworld 7 hours ago 0 replies      
Enigma is building a decentralized, open, secure data marketplace that will change how data is shared, aggregated and monetized to maximize collaboration. Catalyst is our first product and the first application running on our Enigma protocol. Powered by our financial data marketplace, Catalyst empowers users to share and curate data and build profitable, data-driven investment strategies.

So much said that explains so little.

jstanley 14 hours ago 2 replies      
From what I can tell, the ICO wasn't hacked as such. The ICO customers were just scammed.

Edit: title is better now

CryptoPunk 6 hours ago 1 reply      
HackerNews gets an unrepresentative picture of the token market. The only stories that get to the frontpage are the ones concerning hacks. But that's not the whole picture. There are a huge number of token sales happening, and the vast majority are not being hacked. This is certainly newsworthy but it needs to be put into the context of how many token sales occur.
JohnKacz 9 hours ago 2 replies      
Sorry for the uneducated question here, but I've wondered how those who steal crypto-currency "launder?" their ill-gotten coin. More specifically, how do they not get caught since the blockchain records everything? Take it out really quickly? Move little bits around to obscure ownership in some kind of shell game? Am I just totally off-base with my understanding?
ascendantlogic 7 hours ago 1 reply      
This is getting a lot of traction because anything related to crypto evokes really strong responses here, but this was a phishing attack. The fact it happened in the crypto space is largely secondary.

People get phished and get tricked into handing out bank account and credit card details all the time. It's not even newsworthy unless it happens on a large scale. This is only newsworthy because of the fact that it's crypto so people equate this with some sort of deficiency with the technology and/or ecosystem. That's not the case.

tlrobinson 12 hours ago 2 replies      
I'm not sure I buy the premise that every service needs its own coin. Surely in most cases it would be better to just use the most widely used, most stable coin?

If the concept of cryptocurrency is going to survive I think there needs to be one or two clear winners to eventually bring some stability to their value.

Of course, not issuing your own coin doesn't leave as much opportunity to get rich quick off a bit of hype.

ritarong 13 hours ago 1 reply      
It's amazing that this has happened multiple times before and yet people have not learned to be more careful. Greed and FOMO.
jacquesm 10 hours ago 0 replies      
Ah, the good old 'the hacker did it' story. Never fails. I suspect a pretty large fraction of the 'hacker did it' cases are inside jobs.
anovikov 13 hours ago 1 reply      
As a Russian proverb goes, "a thief stole a thief's hat"
joosters 13 hours ago 1 reply      
It just sped up the inevitable losses from another scammy ICO. In this case, the hackers made the process more efficient.
wickedlogic 10 hours ago 0 replies      
Security of the user's machine, not the blockchain, will continue to be the biggest risk in all these systems.

With gold for example, stealing the physical assets takes effort, resources, time, equipment, etc.

With digital assets, that is not the case... and our current level of system security is not adequate in the slightest. It is a challenge we are still largely ignoring today, but crypto currencies will require it be fixed, or better-risk-managed at any rate.

(not advocating gold over digital, but people continue to hand wave the actual risks)

NicenJehr 56 minutes ago 0 replies      
> 3. Weekly password rotation, and daily rotation in the week leading to the token sale

this seems useless

option 12 hours ago 1 reply      
The ICO concept is fundamentally solid and is more efficient than traditional funding sources. What's currently lacking is the implementation. Both the technological and legal frameworks need lots of work, but I bet it'll happen.
SirensOfTitan 13 hours ago 1 reply      
As with most things security, people tend to be the weakest link in the chain.

This type of issue could be solved in a lot of ways; I can imagine a solution wherein:

1. ICOs use a standard 'escrow' contract wherein ether and coin get held by the contract for 7 days or so before either party can withdraw the opposite pair (where either can back out).

2. Building some standard 'ether address' widget that verifies the type of contract an address is. A user-wallet would usually be a warning sign.
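A rough sketch of the hold-period rules in point 1 above, assuming Python for illustration. The class name, API, and the 7-day constant are made up here; a real version would be an on-chain contract, not Python, and would also handle returning the deposit on cancellation.

```python
# Toy model of the escrow idea in point 1 above: deposits are locked
# for a hold period during which either party can back out; withdrawal
# of the opposite pair opens only after the hold elapses uncancelled.
# Illustration only: names and the 7-day window are hypothetical.
from datetime import datetime, timedelta

HOLD_PERIOD = timedelta(days=7)

class Escrow:
    def __init__(self, deposited_at: datetime):
        self.deposited_at = deposited_at
        self.cancelled = False

    def cancel(self, now: datetime) -> bool:
        # Backing out is only possible while funds are still locked.
        if not self.cancelled and now - self.deposited_at < HOLD_PERIOD:
            self.cancelled = True
            return True
        return False

    def can_withdraw(self, now: datetime) -> bool:
        # Withdrawal opens after the hold, provided nobody backed out.
        return not self.cancelled and now - self.deposited_at >= HOLD_PERIOD

t0 = datetime(2017, 8, 1)
e = Escrow(t0)
print(e.can_withdraw(t0 + timedelta(days=3)))   # False: still in the hold window
print(e.can_withdraw(t0 + timedelta(days=8)))   # True: hold elapsed, no cancellation
```

The point of the window is exactly what the comment suggests: a phished or swapped address can be noticed and backed out of before funds become claimable.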

nickbauman 12 hours ago 8 replies      
Cryptocurrency is probably doomed because the makers of cryptocurrencies have a fundamental conceptual disconnect with money: what it is, why it works and what it represents. Money only works when you have a powerful state actor enforcing the legality of the transaction. When you try to escape that, you get at best a parallel system that still goes back to the state for help in keeping functioning, or a system prone to failure, fraud and speculation. To wit, yet another one of these incidents.
hathym 14 hours ago 1 reply      
ICO is the new ponzi scheme
raesene6 13 hours ago 0 replies      
This looks like another in a series of ICOs which are not being handled with appropriate security controls.

When people are planning on taking in millions of dollars of investment in an easily traded, easily stolen, digital currency, they've got to expect attention from relatively well funded/motivated attackers.

Unfortunately many of the founders of these ICOs don't seem to be that well set up in this regard, as some of the disclosed hacks, including this one, aren't exactly advanced.

EternalData 8 hours ago 1 reply      
There's a boom and a bust cycle when it comes to new technologies -- doubtless blockchain will have to go through the buzzsaw just like the early commercial Internet did in the early 2000s.
paultopia 12 hours ago 2 replies      
drngdds 6 hours ago 1 reply      
Do the victims ever get their money back after these cryptocurrency hacks/scams? I know crypto transactions are irreversible by nature, but do the coins ever get seized by law enforcement and returned to their owner? If not, that seems like a major problem. (I know they got around the DAO hack, but that's a unique case.)
OscarTheGrinch 13 hours ago 1 reply      
Initial Clown Outwitting
tzz 7 hours ago 2 replies      
There are a lot of scams on the web, but you don't blame the HTTP protocol. There are a lot of email scams, but you don't blame SMTP. Sad to see the dominant view of this community is against any type of cryptocurrency.
tdb7893 13 hours ago 5 replies      
I wish that passwords like this stopped being the main form of authentication. I guess I'm not sure what's a better way (I like the physical object + pin of my credit card but that's probably not practical for all Web authentication) but it seems pretty obvious that passwords are broken in their current form unless you use a password manager, which can be a hassle
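The "physical object + pin" the commenter likes is roughly what hardware tokens and authenticator apps already do with TOTP (RFC 6238): a shared secret stays on the device and only a short-lived code crosses the wire. A minimal sketch, assuming Python; this is not a full implementation (no secret provisioning, rate limiting, or clock-drift windows):

```python
# Minimal HOTP/TOTP sketch (RFC 4226 / RFC 6238): the one-time codes
# a hardware token or authenticator app generates from a shared secret.
# Illustration only; real deployments need provisioning, rate limiting,
# and tolerance for clock drift.
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(key, unix_time // step, digits)

secret = b"12345678901234567890"                  # RFC test-vector secret
print(hotp(secret, 0))                            # "755224" (RFC 4226, Appendix D)
print(totp(secret, 59, digits=8))                 # "94287082" (RFC 6238, Appendix B)
```

Because the secret never leaves the device, a phished code expires within the time step, which is the property plain passwords lack.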
jmilloy 14 hours ago 1 reply      
Can we get the title fixed? The editorialized title is misleading/inaccurate. (edit: Thanks, it's fixed now.)
EGreg 11 hours ago 2 replies      
Question: in light of the SEC decision regarding the DAO, is there any way to do an ICO that doesn't run the risk of later the whole company being shut down for not registering securities? Like maybe opening the company in Crypto Valley, Zug?

Is there a way to do a public offering of tokens? Or does it necessitate all the same reporting that a publicly listed company has?

Could still be worth it! Because the investors control even less of your board than in the Snapchat IPO.

dmtzz 8 hours ago 1 reply      
what does nab mean?
throw2016 11 hours ago 0 replies      
It appears cryptocurrencies have escaped the technical domain and landed squarely in Nigerian-scam territory.

The cryptocurrency ecosystem has become toxic and irrational, propped up by ignorance, desperation and blind greed.

I wonder what arguments will be made to third world countries at the next climate change summit when a large number of our population seem to be squandering electricity without pause in the hope of riches.

The only way any crypto takes off in the world we live in is if some powerful vested interest sees some use for it, at which point all the speculators, having spent the better part of the past decade pushing fantasy narratives about freedom etc., will sell out every single tall claim made for a dime. Those who do not understand history, and in this case economics, are condemned to repeat it, and badly.

mycosis 13 hours ago 1 reply      
we need to put a hold on all cryptocurrency startups until we figure out what the hell is going on
senatorobama 14 hours ago 1 reply      
So what, just a couple of years worth of work at Google as a SWE.
Workplace flexibility is the way to win the war for talent venturebeat.com
297 points by Mz  1 day ago   132 comments top 26
koliber 1 day ago 6 replies      
I find this to be more and more important the more experienced I get. At some point, I had a poignant realization. I work to earn money. I then use the money to provide for myself & my family, to buy things and experiences, and to build a safety net. I do these things to have a relatively stressless, happy, and fulfilling life.

If I just maximize for cash earned, I need to compensate for other things. I want to maximize free time and minimize commute. I have errands to run and things to do. Having flexible work, with a work from home option is worth more to me than making more money. Otherwise, I need to spend more on an apartment close to the office. A more expensive car can make a commute more bearable. I value peace and quiet, and city living is at odds with this. Reconciling all this while working a 9-5 at a central office in the city costs a lot of money.

I currently work at 15Five, where "Embrace Freedom & Flexibility" is one of our core values. We actually are active in living this and our other core values. You can feel the effect directly. I have spent a lot of time thinking about this, and arrived at the same conclusion as this article.

fogetti 1 day ago 4 replies      
Give me clear goals which are measurable, and I will get things done either remotely or on-site. It doesn't matter which one. This is a win-win situation because my manager can track my progress based on the previously setup measurements. And to achieve this, location is irrelevant.

Things usually start turning south when managers just simply demand speed. Or eagerness. Or dedication. Or other bullshit which has no relevance regarding any specific task.

These are the cases which brilliantly show the lack of management skills. These are the kind of little men who demand that you be at the office at all times. You can call them control freaks, or micro-management fetishists. It doesn't matter. These people just show one thing with their behaviour: that they don't trust their subordinates.

shearnie 1 day ago 0 replies      
I'm currently working remote and took the job over an on-premises gig in the big smoke.

It's a 200 pay cut per day.

However, factoring in fuel, parking, and car-wear expenses, plus the 10 hours per week lost commuting, I'm counter-intuitively better off by the equivalent of 200 per day.

Reason being, my time spent NOT commuting is invested in my bootstrapped startup. If I had to commute, I'd essentially be earning money by sitting in traffic rather than coding, and the coding would have to be paid out to an offshore developer while I'm in a car instead of at my machine, so commuting expenses would erode my net income. Not to mention the stress and the toll commuting takes on health and mental performance: sitting in traffic, cognitively processing the driving, finding a car park, walking ten minutes from the car park to the premises.

My motivation and energy is sapped by the time I'm in the office. And more so by the time I get home.

Prior to this I spent a year doing the commute-so-the-boss-sees-me gig, and compared to now, the difference in performance I notice is remarkable.

dalbasal 1 day ago 5 replies      
The problem with conversations like these.....

If you're going to be talking about "HR issues" out in the open, you can only say certain things. Things people like. The "skeptics of completely virtual organizations" or opponents of flextime, increased holidays, work-life balance, on-site childcare, employee empowerment, or higher pay can't exactly blog about it and get applause. But clearly, they still set the agenda.

I want those things like everyone else does. I'm also pretty sympathetic to a lot of the arguments that they are good for business. But we can't have a discussion when only one side speaks. I am somewhat skeptical about an existing large company virtualizing itself without major teething issues. I know very little about these things, so who cares.

Anyway, where's the CEO blog on "why I make everyone work 9-5." or "never allow anyone to work from home." I know you're out there. Speak your piece!

trevyn 1 day ago 2 replies      
"If conversely you capped the salary of computer programmers, we would expect a flood of companies competing with every possible other amenity they were allowed to offer and managers being very very polite to computer programmers. (In fact this does happen with relatively better computer programmers, which tells us that something is bounding the salaries of top programmers underneath their purely financial equilibrium.)"
durgiston 1 day ago 3 replies      
I personally don't get the love for working at home/remote. I HATE working at home. To me, home is home and work is work. I don't want to mix them, and I feel awful when I have to stay home all day or don't get to interact casually with my coworkers. It's just so depressing to be alone all the time.

That being said, a bad office environment is definitely a turn off, and at this point in my career free lunch and the ilk isn't that much of a perk anymore. I want reasonable hours, decent vacation and health-insurance, and a big income.

danieltillett 1 day ago 3 replies      
There is another alternative, which is the moneyball approach [1]: identify those people overlooked because they don't meet the current fashionable ideal, and hire on the basis of what they can actually produce.

1. https://en.wikipedia.org/wiki/Moneyball

Corrado 6 hours ago 0 replies      
Thinking about it more carefully, I value working at different times or not even a contiguous time range. Sometimes I get up at 4AM and have all kinds of good ideas and am ready to go. In fact, I would probably get more done in 24 hours if I could break it up into 3-4 parts. This type of schedule is difficult, if not impossible, to achieve by going into the office.

I really like being around (most of) my coworkers and would probably feel left out if I worked from home all the time. That's why I really like the flexibility to work from any location at any time. If only I could convince the bosses that me not being at my desk, or not answering chat/email questions right away, is the best way for me to work, I would be happy. Hmmm... now I know what to ask in my next job interview. :)

gedrap 1 day ago 0 replies      
Just treat adults as adults, show a little bit of trust.

I don't like strict schedules where you have to just sit through the required number of hours, or show up at some specific time for no reason (i.e. it does not affect other people). Don't force people to sit in a cramped open-space office, all day every day, no matter the weather and other factors, because of 'collaboration' or whatever. Commuting to the office when it is -20C definitely sucks. If someone wants to set up their ideal working environment at home, perfect: win-win. Most of the time it doesn't even cost much.

Sure, making it work requires some effort from both parties. But it's too often ruled out for no reason (or some BS reason).

I made this switch and that's one of the best things that happened to my quality of life.

wslh 1 day ago 1 reply      
Flexibility was always our "secret", but even though we have people working remotely, we prefer people who work at our offices. Few candidates are ready and responsible enough to work remotely. This is our experience.

So our other flexibility method is time. Most developers work 6 hours per day and that works.

vlokshin 1 day ago 0 replies      
This is important but so difficult. I'm one of the co-founders at Turtle(dot)ai. We have no problem attracting amazing developers who want total remote flex and to work 10-30 hours per week across different projects.

We absolutely have a hurdle with convincing most companies that output = output and that remote work is ok. Companies still put an unreasonable amount of value on butt-in-seat, 9-5 work.

Why? How can we get hiring managers and companies past this?

We try to convince with logic. We show what companies spend on hiring their own devs (20K+ hiring cost, 10-20K/mo salary). Startups tend to go hire-crazy after raising, but it's rare that they need to go 0 to 1 with full-time, local hiring. Going remote can save them a ton (on hourly and on not paying a salary for someone to twiddle their thumbs or show up before a boss does).

Our "easy way out" is to find companies who are already remote friendly. I remain convinced that this is a cultural hurdle we'll get over, but I'd be a liar if I told you I had timing perfectly predicted.

snarfy 1 day ago 0 replies      
When they won't let me adjust my subs' pay, this is about the only thing I can give them. I don't care what hours they work or where they work, as long as they get their work done. They need to attend meetings, but video conferencing is always used.
pascalxus 1 day ago 1 reply      
If there's such a huge war for talent, then why don't those employees have more bargaining power? Last I checked on glassdoor, salaries for SE are way below living costs in the bay area. And the vast majority of SE don't even have enough bargaining power to leave the bay area and work remotely. I strongly suspect, the "war for talent" is a huge myth, or perhaps, isolated to small specialties. I've worked for a company in the bay area that had huge growth and taken part in much of the hiring. We seemed to have more than enough talent available - so much so, we were actually turning some of it away. If anything, the problem was more - being able to recognize talent and hire it, rather than lack of supply.
RealityNow 1 day ago 1 reply      
This is why I never understood the obsession with working for the big prestigious corporations like Google and the unicorns. As far as I'm aware, they don't allow remote work and working from home without an excuse correct?
joeax 1 day ago 0 replies      
Hopefully, more employers are waking up to this reality. I'd bet there are thousands of devs out there willing to take a pay cut for the opportunity to work at home full time, some who have confided this to me personally. It actually makes financial sense too.

The true cost of commuting: http://www.mrmoneymustache.com/2011/10/06/the-true-cost-of-c...

nradov 1 day ago 0 replies      
While workplace flexibility is a good idea in general, this article is based on a false premise. Business isn't war, and there is little evidence that "talent" even exists as a specific quality.


gerbilly 1 day ago 0 replies      
In most jobs you are being paid for a blend of:

- Skills and abilities.

- Availability.

In workplaces where facetime is valued, they are maximizing availability.

Sometimes this can be legitimate, since you do have to collaborate with others and be available to answer questions etc.

However, in my experience those workplaces which are obsessed with availability tend to be disorganized firefighting organizations that operate in an 'all hands on deck' style.

cseanmccoy 1 day ago 0 replies      
I think rather than choosing one over the other, a company has to find a balance. While working remotely offers employees flexibility, whether it's the ability to manage different branches in different locations or, as the article suggests, a "reward," there's still value in human-to-human interaction. Digital means of communication offer instant service, are mostly transparent, and leave limited chance of miscommunication or forgotten directives, since the log of messages provides users with a written history of the conversation. At the same time, working collaboratively in an office of fellow employees yields an established vernacular. Workers are acquainted with other workers' styles, methods and tricks for business. There are more learning opportunities, I believe, in same-place working, because a team of people experiences the same challenges in real time.
pklausler 1 day ago 0 replies      
It's not remote vs on-site for best performance from engineers, it's nice and quiet vs distracting. An open office jam-packed with noise makes remote work look way more attractive.
AndrewKemendo 1 day ago 0 replies      
We're a 100% remote company, and you would probably not be surprised at the extreme level of push back we've gotten from VC's about that, to the point where it's the reason they won't invest.

Now, that might be a BS excuse on their part, but in many cases I know it's not.

There are many many benefits to being a remote company, but it does take a particular kind of person to make it work. The best people for remote tend to be older and more highly skilled in my experience.

audiolion 1 day ago 0 replies      
This article seems loosely based on ideas from the Dropbox blog post on open offices [0]. The connection is the observation that employers offer all kinds of perks, yet more office space or private space is never one of them, even though it could be a useful perk to attract talent.

[0] Is the open office layout dead? - https://news.ycombinator.com/item?id=15060623

SirLJ 1 day ago 0 replies      
Once you cross into six-digit salary territory, free lunches and similar gimmicks are not going to cut it anymore... the only thing that keeps me in my current company is the working-from-home factor and all the perks it entails, like no traffic, a home gym, after-lunch naps, swimming at 17:00, etc.
Arelius 1 day ago 0 replies      
Rad, where is our flexible work conference/meetup? I guess we'd want to do that remotely? Scheduling might still be difficult.
xchaotic 1 day ago 1 reply      
DHH from 37signals / basecamp agrees with that. I just wonder what happens when there is enough flexible companies offering similar terms. I guess it's back to square one then?
akshay1938 1 day ago 0 replies      
Rightly said
tweedledee 1 day ago 2 replies      
If you can't find people of talent then you're not paying enough.

I quit working because I couldn't get paid more than $300K working for someone else despite the fact that I make my employers way way more than that. I quit to do my own startup and made a ton of money. I'm now retired way too young. I'd love to go back to work because I like having a big impact on people's lives but now I'd need at least $500K to break even on taxes if I moved back to the US and no one is paying that for me.

Firefox 55 and Selenium IDE seleniumhq.wordpress.com
290 points by slgt  1 day ago   120 comments top 21
jgraham 1 day ago 1 reply      
There seems to be a certain amount of confusion on this thread, so just to be sure that it's clear, Selenium itself will continue to work with future releases of Firefox. This means that existing WebDriver-based browser automation remains compatible with Firefox 55+. SauceLabs', and other testing-infrastructure-as-a-service providers', support for Firefox will be unchanged. The only thing that will stop working is the Selenium IDE, which is a XUL extension that allows writing Selenium tests without writing any code. This is undoubtedly useful to some people, but as the original post says, there are alternatives in development that are targeting a similar niche, so the situation is not as dire as the title implies (I imagine the title is worded this way to reduce the number of duplicate bug reports that the Selenium owners have to wade through on the subject of Firefox 55+ compat. for IDE).

To expand further on Mozilla's ongoing commitment to WebDriver: we employ one of the editors of the WebDriver specification [1], are making significant contributions to the WebDriver test suite, and are actively working on our geckodriver implementation to ensure that we have a fully-featured, standards-compliant implementation as soon as possible. I also know that other vendors are working on improving the spec compatibility of their implementations, so I think the future of WebDriver is very promising, with fewer differences between drivers that aren't the result of fundamental differences in the browsers under test.

[1] https://w3c.github.io/webdriver/webdriver-spec.html

jph 1 day ago 3 replies      
Selenium IDE is a tremendous help for projects that want integration testing, and want to enable any team member to write ballpark tests.

Selenium IDE saved us hundreds or thousands of hours.

For example, my team took an existing web app with no tests and had the project manager and junior business analysts go through the entire app with the Selenium IDE, writing tests.

This enabled our dev team to start coding fast, and refactoring fast.

In parallel our QA team then took those tests and used them as first-drafts to create even better tests that were generic, maintainable, randomizable, and so forth.

snowl 1 day ago 3 replies      
Of course, Selenium (WebDriver) still works. However, this might be the push for people to investigate other test frameworks. I found TestCafe by DevExpress (https://devexpress.github.io/testcafe/) is a great tool and is also completely open source (MIT!). It's a shame that the community is so small because it's so powerful. Maybe this might be the push that moves some more people over?
Osmose 1 day ago 6 replies      
I'd be curious to see what the usage stats on the IDE are; my company (Mozilla, ironically) uses Selenium a ton, but as far as I know we write all of our tests by hand instead of with the IDE since it allows you to use patterns like Page objects to make your tests more maintainable in the long run. Although the last time I used the IDE was years ago, so it may be more flexible than it used to be.

Does anyone extensively use the IDE?

jackblack8989 1 day ago 1 reply      
I have a feeling that the kind of firms that were benefiting from Selenium IDE (hiring cheap labor to write tests as an afterthought) were not the kind of firms that would spend money to contribute.

This isn't sustainable.

danidiaz 1 day ago 7 replies      
Test recorders are an antipattern. They lure you with their "no programming required!" siren call, but the generated scripts invariably end up as a fragile, poorly abstracted mess.

GUI tests suites, especially those created with recorders, are prone to become unmaintainable beasts full of flaky tests.

This old interview with Bret Pettichord makes good points about the use of recorders: https://youtu.be/s_CUPs6xAWw?t=590

fdim 1 day ago 1 reply      
For this exact reason I made an extension for Chrome that covers some of its features:


I'd rather tag elements of interest and record (and adjust manually) than write everything in code.

hardwaresofton 1 day ago 3 replies      
I wonder if there's anyone here from SauceLabs or any other company that does browser automation as a service. Was this easy to see coming? Is everything pants-on-fire there at the moment?
t0mbstone 1 day ago 2 replies      
Wow, that's really sad to hear!

I wonder if, instead of abandoning the project, they could put together a patron sponsorship to fund further development?

I'm sure there are plenty of companies out there that rely on firefox and selenium and would be more than happy to help contribute?

sambe 1 day ago 1 reply      
I'm getting the impression that a lot of extension developers are not just complaining about these changes but also making the decision to shut down their development. Feels like it could be pretty bad news for Firefox.
slgt 1 day ago 1 reply      
In the last hours since submitting this link to HN, I started using the Selenium IDE made by Kantu:


It works well for what it does and covers basic recording/replay for the core commands. But it is a new project, and by no means a full replacement yet. Code is on GitHub (GPL license).

sweep3r 1 day ago 4 replies      
This is what is going to happen to most extensions. Mozilla is bonkers, doing this. They're going to regret it.
8114Y 1 day ago 0 replies      
I am very sad to hear this. Yes, this is antipattern, yes, selectors are doomed, but I had so many good experiences of introducing automated integration tests with IDE to QA teams, who later moved to hand-written tests. Learning curve was small and benefits were quick.
fulafel 1 day ago 1 reply      
What are some good open source tools in the area of rapid development of web end-to-end tests?

I know about Robot Framework, which does not record and replay but instead uses an AppleScript-style, English-flavoured DSL. But it still seems quicker than scripting Selenium.

rdiddly 1 day ago 2 replies      
Maybe instead of "will not be fixed" the headline should say "contributors needed?"
Aardwolf 1 day ago 0 replies      
Might it still work in SeaMonkey? Unless SeaMonkey also plans to change the extension mechanism...
bertolo1988 1 day ago 1 reply      
Why would they maintain it? Browsers are providing headless ways to accomplish the same.
brian_herman 1 day ago 1 reply      
Is there something like Electron for Firefox XUL? There actually is: https://developer.mozilla.org/en-US/docs/Archive/Mozilla/XUL...

edit: answering my own question.

blubb-fish 1 day ago 2 replies      
We (ab)use Selenium on headless Linux servers for task automation where no API is available. I hope this isn't affected - otherwise I'll have to `sudo apt-mark hold firefox`.

Suggestions for alternative software for this purpose?

my2ndaccount 1 day ago 0 replies      
Selenium is fantastic for projects that need integration testing and enables us to write ballpark tests.

Selenium IDE optimized our process and helped us save a lot of time.

norswap 1 day ago 0 replies      
The decision for Firefox to move to web extensions is stupid. The only reason I'm still using Firefox, which is technologically inferior (1) to WebKit-based browsers, is its high-quality addons.

(1) Slower & crashes more, just more UX problems on websites all around (including youtube!).

And this Selenium thing is just another example of that.

You're really crippled in what you can do in Chrome. As a compulsive hoarder of bookmarks (about 2k last time I checked), it's important for me to have a bookmarks sidebar. Chrome doesn't have one, and it's not possible to add one via an addon.

Startups should not use React medium.com
326 points by nbmh  2 days ago   167 comments top 33
pluma 1 day ago 13 replies      
Nerds shouldn't write opinion pieces about subject domains they don't understand.

Seriously, stop this. Sometimes you just need to admit you have no idea what you're talking about and shut up.

The author honestly thinks using Preact or Inferno could protect them from patent lawsuits. Oh, wait, maybe "Facebook holds any software patents on the Virtual DOM or the React APIs", so better use Vue and Cycle.

Unless you actually know

1) which patents Facebook holds and

2) which patents are relevant to each framework/library (i.e. React and its various alternatives)

stop giving people legal advice about which library they should be using.

The cosmic irony would be if Facebook didn't hold any patents covering React to begin with but DID hold patents covering parts of Angular, Ember, Vue and Preact, which they could use to sue whomever they like because Facebook never gave them a patent grant for those. Sounds far-fetched? It isn't, because we don't know which actual patents these could be and who holds them.

Or for all you know Google might sue you. Or Apple.

This isn't a discussion, this is literally just a bunch of nerds ranting on the Internet about problems they don't sufficiently understand, playing Three Blind Men with the elephant that is Software Patents.

cbhl 1 day ago 5 replies      
It's worth noting this "you can't sue us for violating your patents if you use our non-free open source software" is working as designed.

Facebook claims that if every company adopted a React-like license, software patents as we know them would basically die. It's worth noting that both Google's and Facebook's patent lawyers are generally of the opinion that software patents are a net bad, but they differ in their opinions of how to express that intent without exposing their companies to additional risk from patent trolls.

If you want to be acquired, then this is the opposite of what you want. You file patents for every part of the product you can; you audit your dependencies to avoid copyleft (AGPL and GPL) and React-like licenses, so your software can be folded into a 100% closed source product or shut down or whatever your acquirer wants.

If you run a start-up, and you're worried about the React license, you should be speaking to your own legal counsel about the best way forward.

franciscop 1 day ago 1 reply      
The author is making assumptions about what Open Source is and what it should or shouldn't be. While many developers would like Open Source to be about "creating communities to build better software together" (myself included), open source just means that everyone can read the code.

Different developers and companies might use Open Source for different reasons, including but not limited to: reducing QA burden, brand relevance, increasing hiring power, strategic positioning, the ideal that code should be _libre_, etc. Some companies and devs might even have several of those!

In this line, Facebook is a private corporation whose main reason for releasing React.js, or any code at all, doesn't seem to be purely idealistic, as I think we all agree. I would say strategic positioning (having the best tool in the dev world, notably against Angular) and increasing their hiring power rank really high among their reasons to release Open Source.

It is patently absurd to tell companies what to do and patronizing to tell developers what to do. Also, something that I don't see anyone arguing for/against is why so many big companies, even ones competing with Facebook, can use React.js freely and without worries. It's a point that anyone arguing against React conveniently ignores, but I'd love to hear about it.

scandox 1 day ago 1 reply      
Trust. Trust. Trust and Trust again. My brain becomes exhausted within seconds of reading a licence. Not just because I'm lazy, but because I know that however closely I think I'm reading it, I probably won't be reading it closely enough to be 100% sure of my conclusions (viz. the differences of opinion here from people that actually have read this thing).

So what do I do? I trust certain organisations and I don't trust others.

No-one in their right mind can trust Facebook. You might as well trust the Ocean.

sheetjs 1 day ago 2 replies      
There was a time when React was Apache v2! https://github.com/facebook/react/blob/3bbed150ab58a07b0c4fa... shows that license.

Has anyone seriously explored forking React from the last Apache v2 version?

jasonkester 1 day ago 3 replies      
I really like the idea behind this license.

They want to see a world where software patents no longer exist. So they write a term into their licensing that makes it really difficult for people who do like software patents to use their stuff.

I think I will move my projects over to a similar license. The only thing I would change would be to broaden it to invalidate if your company sued anybody over any patent.

If everybody did that, maybe software patents would finally go away.

vim_wannabe 1 day ago 1 reply      
Does this mean I should primarily use services from startups that use React, so that they won't get acquired and the service shut down?
matthewmacleod 1 day ago 4 replies      

There are, AFAIK, no known patents on React. This means you can go ahead and sue Facebook for patent violations to your heart's content. The license they granted to you to use any of their patents applied to React (of which there are none) is terminated, and you can merrily continue using React.

If this is incorrect, and Facebook actually do hold patents on React, then all of the popular alternatives almost certainly infringe on them as well. So, the worst-case scenario is no different.

chrisco255 1 day ago 1 reply      
Do most software startups even have patentable technology? I'm rather curious about this. Most consumer and SaaS apps I know of are built on non-patented software so I generally question this advice.

The fridge example was a case in point of how ridiculously low the odds of any company getting into patent litigation with Facebook are. To go to battle with FB you're gonna need millions and it's going to take years. That's not a light decision.

danielrhodes 1 day ago 1 reply      
Are companies getting asked about React in M&A due diligence or has any lawyer recommended this, because otherwise this post is pure clickbait.
pluma 1 day ago 1 reply      
Aside from the validity of the article's claims about patents (see my other tirades about that) I'm not sure the point even makes sense.

React, the library, is at its core a glorified templating system. It provides plenty of escape hatches that make migration as well as inclusion of foreign UI components and libraries a breeze. It's stupidly simple to migrate away from.

If you are a high valuation startup looking to get acquired for your technology (rather than acquihired) I find it extremely unlikely your valuation hinges on your frontend code. And even if it does I find it extremely unlikely your frontend is tied so closely to React you won't be able to spend, say, 1MM replacing React with Vue or what have you (maybe at the cost of a little pizzazz).

If your frontend is animation-heavy, that likely doesn't live in React land. If your frontend is mostly static, it should be trivial to replace React as well.

If your startup is valuable, being sued over some frontend library is probably the least of your concerns. If the company looking to acquire you has enough cash in the bank to sue Facebook, they have far more than enough cash in the bank to replace React.

thomyorkie 1 day ago 3 replies      
> If all giants agreed to open source under the BSD + patents scheme, cross-adoption would grind to a halt. Why? If Google released Project X under BSD + Patents, and Amazon really liked it, rather than adopting it and losing their right to ever sue Google for patents, they would go off and build it on their own.

This seems like a reasonable argument, but it doesn't seem to have deterred several big-name companies from using React: Airbnb, Netflix, and Dropbox, for example.

amelius 1 day ago 0 replies      
I'm not using React for another reason. I don't agree with the way they treat their users (i.e., as a product).
npad 1 day ago 1 reply      
What happened to the "software patents are ridiculous and should never be granted" argument?

Now it seems that the same sort of people advancing the anti-patent argument are angry about FB's licence. This seems like pretty muddled thinking.

hoodoof 1 day ago 0 replies      
"So you've sewn up the market eh? Here's your check for $500million."

"But don't you want to know what technology we built it with?"


epicide 1 day ago 0 replies      
> If there is no chance of igniting a community, there is no reason to open source.

I see most of this article as a dangerous way of thinking, but especially the above.

The mentality I get from this quote (especially combined with its context) is basically: I should only open source something I'm working on if I can build a community around it (that I control/influence/benefit from).

Open sourcing your software should be the default. If I make a tool or small library/function, I would rather look for a reason NOT to open source it. When I can't think of one, I will open it up, regardless of whether or not there is a "chance of igniting a community".

CityWanderer 1 day ago 3 replies      
What makes the PATENTS file legally binding? If I install React via NPM/Yarn, or even as a dependency of another project, I will not see this file.

LICENSE is a pretty common convention and you could argue I should seek out this file in every one of my dependencies' dependencies - but how would I know to look for PATENTS?

Are all statements in the code base legally binding? Could they be hidden in a source file somewhere?

skrebbel 1 day ago 0 replies      
This is a badly written article full of FUD. It's written by an angry backend engineer, not a lawyer, and it shows.

He goes from this:

> The instant you sue Facebook, your patent rights for React (and any other Facebook open source technology you happen to use) are automatically revoked.

To this:

> If you use React, you cannot go against Facebook for any patent they hold. Full period.

"Full period", really? Because the first does not imply the second. This is not how patent law works.

Now, I'm not a lawyer either, but broad assertions like these should tell you that there's emotion at work here, not reason. In his fourth update, he made a list of companies that add something about patents to their open source licenses, implying that somehow that proves something.

So the thing that people confuse here is patents and copyrights. The BSD license grants you the right to use works copyrighted by Facebook people and contributors. The patents clause, further, promises that should Facebook hold any patents that cover the OSS, they won't use them against you, unless you sue them first.

There is the whole idea floating around the internet that a BSD license somehow ensures that nobody will sue you for patent infringement. I really don't understand where this comes from. Hell, Android is Apache Licensed (which includes a patent grant) and still anyone who makes an Android phone has to pay license fees to all kinds of patent trolls (Microsoft most notably). These things are totally separate.

So first, if you sue Facebook for patents, you lose their patent grant (so they can sue you back, which everybody always does anyway - it's the only defense companies have in patent wars). But you don't lose the BSD license or anything. That's not how it works. All you lose is Facebook's promise not to sue you because you use React.

Secondly, and this is the core point, patents don't cover code, they cover ideas. Any patents that Facebook might have that, right or wrong, cover React, will surely be written broad enough that they also cover Preact, Inferno, Vue.js probably, and I bet also Angular. Not using React but one of these other libraries therefore makes no difference - in both cases, Facebook can use their React-ish patents to sue you.

To my understanding, patent lawsuits rarely get to the nitty gritty details of actual patents in reality. It does not matter whether a Facebook patent written broadly actually covers Vue.js or not - in practice, more often than not, companies will compare the height of the patent stacks they have, and agree on a settlement based on that.

All this patent grant says is that Facebook gets to use their patents that cover OSS to make their stack of paper a bit higher. Like they would if they hadn't made a patent grant at all.

So, repeat after me: using open source does not shield you from patent infringment lawsuits.

BukhariH 1 day ago 3 replies      
Can someone please share what patents cover react?

Because if they're revoking patents that don't cover React, then there should be no problem continuing to use React, right?

vladimir-y 1 day ago 1 reply      
Can the title be generalized? Like don't use anything from FB?
codingdave 1 day ago 0 replies      
Even if everything in this article were 100% correct, which is clearly arguable, think about how this would truly play out. Company X would sue Facebook. Facebook would sue them back for using React... and then... lawsuits would ensue. Attorneys would do their things. Cases would be argued out of court. Lots of legal stuff would be going on, and plenty of time would be had for the engineers to select and move to a new framework.

Yes, I think there are problems with the license, and I'm not using React. But do I really think those problems will result in some scenario where you have an overnight show-stopper of your business because of it? Extremely unlikely.

Startups need to stop fearing the law and start understanding it.

k__ 1 day ago 3 replies      
What is the safe alternative here?

I mean, FB probably has patents.


Probably they have at least one that covers things React can do.

Almost every framework moved to components and virtual DOM.

So there is a big chance that any framework out there could infringe some of these React patents.

So they can either

revoke your React license when you sue them, or

sue you over patent infringement if you don't use React

afro88 1 day ago 1 reply      
There were a lot of people in the older thread about the patents stuff saying things like "well, are you ever going to sue Facebook?? You don't need to worry about the patents stuff".

But consider this: Facebook do something disastrous, like leak a bunch of private or financial data and it affects you really badly. There's a class action against Facebook. Now you can't join it, because you don't wanna rewrite your app without React to ensure Facebook can't counter sue over a patent that may or may not exist on React.

williamle8300 1 day ago 0 replies      
Facebook is like the Disney in the tech world. They want to be that trove of intellectual property.

They take free-to-use stuff (Disney is cheap ripoff of Hans Christian Anderson's fables), and create "magical" stuff that they protect with their arsenal of lawyers.

If Facebook is able to pull the wool over our eyes this time... OSS is gonna be in a bad place in the next century just like how Disney single-handedly lobbied to change public domain laws in America.

tchaffee 1 day ago 0 replies      
I wonder if Facebook's claims that they are doing this in order to make patents useless would have legal standing. In other words, if they become "evil" about this patent clause at some point in the future and try to enforce this in the bad ways that people are imagining might happen, then doesn't Facebook's clearly and publicly stated intentions hurt any claim they would make which goes against those intentions?
guelo 1 day ago 1 reply      
This doesn't convince me. As a consumer, patents and patent lawsuits are almost always bad. Patents reduce options in the market, lawsuits between companies waste resources, startups being acquired reduces market options. The only real argument is that it will prevent communities from forming. But I don't buy it. Open source needs competition too; monolithic ecosystems are bad. As an example, Apple didn't want to contribute to gcc so they created LLVM, which is a boon to everybody.
blackoil 1 day ago 0 replies      
Someone with knowledge should bring clarity to all this noise!

My understanding is, if I sue FB for some patents, they can sue me back with any patents they may hold on React. We do not know of any such patents they own. So practically I am no safer if I use preact/vue or even Angular, since they may own some patents that cover those tech.

tldr; Do not sue FB unless you have muscles.

bitL 1 day ago 1 reply      
It truly seems non-mature businesses should stop relying on open-source with "baggage" and utilize only free software (AGPL3+) that has dual-licensing for commercial use with support as e.g. in Qt, unless you are 100% sure for your product lifecycle you won't get into direct business collision with the "baggage" author.
hoodoof 1 day ago 1 reply      
"Look, we were going to buy you for $500million but our thorough due diligence has turned over a rather nasty stone that you probably wished we didn't look under. You know what I mean don't you? YES - we found out your dirty little secret that you're using ReactJS. Due to this, we have decided to pull the deal in favor of your competitor who uses AngularJS. What you need to understand is that although you've cornered the market with your superb software and business model, we are dead serious about never buying companies that have built on ReactJS. We have a deep, and we think entirely valid, concern that Facebook will, at a point in time, suddenly pull the carpet from under you and Mark Zuckerberg will be laughing at us saying 'suckers... we sure got you with the whole ReactJS ruse didn't we!'"

"We're also not very enthused about you building on Amazon - surprised you'd take a risk like that, it doesn't indicate much business sense."

"Sorry to say, but your business, due to the ReactJS decision, is worth $0."

jlebrech 1 day ago 0 replies      
My reason is that your app doesn't need the whiz-bang reactiveness of React or any other frontend framework just yet. It's just extra overhead.
halfnibble 1 day ago 0 replies      
I've been saying this for months. Don't use React!
notaboutdave 1 day ago 2 replies      
Easy workaround: Install Preact. No code changes required, at least not for me last year.
dimillian 1 day ago 1 reply      
Yeah because small startups will totally go after Facebook. Make sense. Wow.
Poland's oldest university denies Google's right to patent Polish coding concept pap.pl
320 points by Jerry2  2 days ago   80 comments top 18
CalChris 2 days ago 3 replies      
I'm not understanding how Google and its employees are claiming to be the original inventors here.

> Each inventor must sign an oath or declaration that includes certain statements required by law and the USPTO rules, including the statement that he or she believes himself or herself to be the original inventor or an original joint inventor of a claimed invention in the application and the statement that the application was made or authorized to be made by him or her.


How is Google+Co the original inventor?

nxc18 2 days ago 2 replies      
Wow, fuck Google. They really should consider re-adopting "Don't be evil" for PR proposes at the very least.

(This isn't too say other companies don't pull the same shit; fuck them all just as much)

wmu 2 days ago 0 replies      
Side note. When I was preparing biographies of Abraham Lempel and Jacob Ziv (the inventors of LZ77 and LZ78), I read an interview with Lempel. He was asked why they hadn't patented their algorithms. And he replied like this: we're scientists, our goal is to improve the world, not to be rich. His answer surprised me. They clearly knew that the invention was remarkable and would be profitable, but deliberately made it free.
willvarfar 2 days ago 1 reply      
(For those interested in data compression, https://encode.ru is very active. This thread covers the rANS patent problems: https://encode.ru/threads/2648-Published-rANS-patent-by-Stor... )
aaimnr 2 days ago 0 replies      
I stumbled upon this edit war concerning Huffman Coding article on Wikipedia [1], where the ANS algorithm author (Jarek Duda) justifies his edits back in 2007 as a way to "shorten the delay for its [ANS] current wide use in modern compressors, leading to unimaginable world-wide energy and time savings thanks to up to 30x speedup."

Sounds dramatic, but today it seems like he had a point. The other guy (guarding Wikipedia against self promotion) has a point too, though.

[1] https://en.wikipedia.org/wiki/Talk%3AHuffman_coding

alecco 2 days ago 2 replies      
#3 168 points 7h IOCCC Flight Simulator

#72 280 points 6h Poland's oldest university denies Google's right to patent Polish coding concept

(had to scroll to middle of 3rd page)

(and it's #1 on Algolia 24hs top)

Makes sense, perfectly explainable.

woranl 2 days ago 1 reply      
Today's Google is a sugar coated evil corporation. "Don't be evil"... pathetic.
kuschku 2 days ago 0 replies      
This is related to the ANS patent of Google, which was previously discussed at https://news.ycombinator.com/item?id=14751977
agsamek 2 days ago 1 reply      
This post had 251 points in two hours. It was the no. 1 post for some time, and now it has been downgraded to 42nd position in the list, 2 hours after posting with 251 points. How is it possible????
Cpoll 2 days ago 2 replies      
Can anyone explain Google's rationale here?

As I understand the US patent system, patent trolls can and do make these sorts of patent filings all the time, and the legitimacy doesn't matter, because their victims can't afford to defend themselves in court.

Isn't it irrational not to file patents like these?

Or is Google planning to use this patent "offensively?"

informatimago 2 days ago 0 replies      
Google, the universal evil company.

(That's where you realise emojis lack a pinky finger, that could become google's logo).

RandomInteger4 2 days ago 0 replies      
I don't understand how companies can be so bold as to file for patents on things that are already in industry use by more than the filer of the patent.
654wak654 2 days ago 1 reply      
Does the article mean ENcoding and not just coding?
userbinator 2 days ago 0 replies      
It's interesting that arithmetic compression and its variants seem to be a favourite of those looking for something to patent. From the description of ANS, it looks very similar to the QM/Q-coder for JBIG/2, JPEG, and JPEG2000, which was patented by IBM a long time ago (since expired.)
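For readers unfamiliar with the ANS family discussed in this thread, here is a toy sketch of the core rANS transform. This is my own illustration, not any patented production form: it uses an unbounded Python integer as the state and skips the renormalization and table tricks that make real implementations fast.

```python
from collections import Counter

def build_tables(freqs):
    # cum[s] = total frequency of all symbols ordered before s
    cum, total = {}, 0
    for s in sorted(freqs):
        cum[s] = total
        total += freqs[s]
    return cum, total

def rans_encode(message, freqs):
    """Fold the whole message into one big integer state."""
    cum, total = build_tables(freqs)
    x = 1  # initial state
    for s in reversed(message):  # rANS encodes in reverse order
        f, c = freqs[s], cum[s]
        x = (x // f) * total + c + (x % f)
    return x

def rans_decode(x, n, freqs):
    """Pop n symbols back out of the state, first-to-last."""
    cum, total = build_tables(freqs)
    out = []
    for _ in range(n):
        slot = x % total
        # linear scan for the symbol whose slot range contains `slot`
        for s, c in cum.items():
            if c <= slot < c + freqs[s]:
                break
        out.append(s)
        x = freqs[s] * (x // total) + slot - cum[s]
    return out

msg = list("abracadabra")
freqs = Counter(msg)
code = rans_encode(msg, freqs)
assert rans_decode(code, len(msg), freqs) == msg
```

Each encode step is the exact inverse of each decode step, which is why the roundtrip is lossless; the size of the final integer approaches the message's entropy, the property that makes ANS competitive with arithmetic coding at Huffman-like speed.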
master_yoda_1 2 days ago 1 reply      
Someone should stop the monopoly of Google in computer science and AI. Otherwise it's going to be dangerous.
mirekrusin 2 days ago 1 reply      
Why has this news, posted 2 hours ago, slid from the front page to 40th position in about 2 minutes? It's got 251 points and 52 comments, which is way more than anything on the front page.
Sylphine 2 days ago 2 replies      
e-beach 2 days ago 1 reply      
Sorry, but I wouldn't trust an article written by the Polish state media. The title of the article, labeling the idea a "Polish coding concept", clearly presupposes that Google's claim was baseless.
Initial Hammer2 filesystem implementation dragonflybsd.org
223 points by joeschmoe3  1 day ago   60 comments top 7
jitl 1 day ago 7 replies      
Very exciting to see implementation progress on HAMMER2. Some basics about the design:

- This is DragonflyBSD's next-gen filesystem.

- copy-on-write, implying snapshots and such, like ZFS, but snapshots are writable.

- compression and de-duplication, like ZFS

- a clustering system

- extra care to reduce RAM needs, in contrast to ZFS

- extra care to allow pre-allocation of files by writing zeros, something that will make SQL databases easier to run performantly on HAMMER2 than on ZFS

And much more. The design doc is an interesting read, take a look:


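To illustrate the pre-allocation point above: databases commonly reserve a file's final size up front by writing zero blocks, so later page writes land in already-reserved space. A minimal userspace sketch (the sizes and filename are arbitrary; what differs between filesystems is whether the zero-fill actually reserves blocks, the case the comment says HAMMER2 takes extra care to support):

```python
import os
import tempfile

def preallocate(path, size, block=4096):
    """Extend a file to `size` bytes by writing zero blocks."""
    zeros = b"\x00" * block
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(block, remaining)
            f.write(zeros[:n])
            remaining -= n

path = os.path.join(tempfile.mkdtemp(), "db.wal")
preallocate(path, 1 << 20)               # reserve 1 MiB up front
assert os.path.getsize(path) == 1 << 20  # logical size is guaranteed
```

The logical size is guaranteed everywhere, but on a copy-on-write filesystem with compression or dedup the zero blocks may not pin down physical space, so a later overwrite can still trigger fresh allocation; that is the database-performance gap the design aims to close.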
gigatexal 1 day ago 1 reply      
Dillon, I had always thought, was a hack when he forked FreeBSD at 4.x, but he's proven to have some novel ideas when it comes to these things, and I'm looking forward to trying out the production-ready Hammer2 FS
TallGuyShort 23 hours ago 1 reply      
Other than the design doc (which, being BSD, is bound to be the primary source of truth), does anyone know of any tech talks or more visual presentations about the design of Hammer FS? I sure do love the design doc being available, but for just starting to wrap your head around an FS architecture, talking through some slides would sure be neat. I'm not immediately seeing much on YouTube...
alberth 1 day ago 0 replies      
It's amazing how much work the Dfly team has accomplished given how few developers there are.

I really hope Dfly gets more adoption and broader use.

blue1 1 day ago 1 reply      
Does H2 feature data integrity (checksums etc)? For me that is one of the best features of ZFS
pmoriarty 1 day ago 4 replies      
Is there any work to get HAMMER2 on Linux?
beastman82 1 day ago 1 reply      
This sounds like a weapon system designed by Stark Industries.
UK Government's Payment Infrastructure Is Now Open Source cloudapps.digital
242 points by edent  18 hours ago   93 comments top 9
Nursie 16 hours ago 4 replies      
Still got Google Analytics on the page.

I do not feel that reporting every online interaction I have with my government in the UK, back to a huge corporate in the US, is in any way appropriate. But I can't even get anyone to engage on the issue.

When I tried to raise it I got directed to a helpdesk ticket on a site run by an SV helpdesk-as-a-service company.

I appreciate that gov.uk have done some great stuff getting the UK government online, and their designs and Open Source attitude are refreshing, but this is a serious privacy issue.

chatmasta 11 hours ago 1 reply      
I went through a visa application process for the UK over the past few months. The main gov.uk site is a very good website for finding information, well designed, works on mobile, etc. Coming from the US, that was quite refreshing -- there's no equivalent in the US as everything is scattered across 100 different agency websites in 50 states.

However the "business logic" of gov.uk is still sorely lacking. For the actual visa application process and payment, I was bounced around between 4-5 different third party websites handling different aspects of the process. I'm sure further integration with gov.uk is on the roadmap, and it will certainly be nice.

As a new resident of the U.K., though, I have to admit I've been pleasantly surprised and very happy with the gov.uk website so far.

robin_reala 16 hours ago 1 reply      
If you havent heard of this before theres a good introduction to the project at https://gds.blog.gov.uk/2015/07/23/making-payments-more-conv...
sitepodmatt 11 hours ago 1 reply      
Every interaction I have with a gov.uk portal is a painful UX disaster - most recently passport and driver license, both of which had a payment submission stage. I can't imagine anyone saying 'wow, look at how gov.uk got it right, let's use their code' - a glorified CMS system with forms and payments bolted on - badly - so badly.

Just rechecked: it's still complete crap. They can't support the back button - no POST/redirect pattern, so you get 'confirm form resubmission'. https://passportapplication.service.gov.uk/

rekado 16 hours ago 0 replies      
I'm happy to see that they are using GNU Guix: https://github.com/alphagov?q=guix
confounded 7 hours ago 0 replies      
There are very few positive comments here, but I think it's fantastic that this progress has been made (even if it's not perfect). I had no idea the sites could be used without JS at all; that's brilliant!
camus2 15 hours ago 0 replies      
Interesting, if you check out the tech it's mostly Java for the backend and Javascript for the front-end.
nepotism2018 16 hours ago 4 replies      
pyb 16 hours ago 3 replies      
This looks more like a manual. Where does it say that the infrastructure is open source ? I didn't see any source code.
Is the open office layout dead? dropbox.com
296 points by Antrikshy  2 days ago   357 comments top 4
rayiner 2 days ago 4 replies      
I've never understood the price issue. Class A in midtown Manhattan is $80 per square foot. That's $8,000 per year for a developer that probably makes well over six figures. Put two in an office and that's only $4,000 per year each.

I really don't buy it considering how much money companies spend on office space. Like, why the hell would you have offices in Greenwich Village (http://www.businessinsider.com/facebook-new-york-office-tour...) instead of, say, FiDi? It'd not only be much cheaper, but an easier commute from places people live, like Brooklyn.
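Spelled out, the back-of-envelope above assumes roughly 100 square feet per developer (my assumption; only the $80/sq ft figure comes from the comment):

```python
rate = 80            # $/sq ft/year, Class A midtown (from the comment)
sqft_per_dev = 100   # assumed private-office footprint per developer

cost_solo = rate * sqft_per_dev      # one developer per office
cost_shared = cost_solo // 2         # two developers sharing the same office
assert cost_solo == 8000             # matches the $8,000/year figure
assert cost_shared == 4000           # matches the $4,000/year figure
```

Against a salary well over $100,000, the office adds well under 10% to the cost of a developer, which is the comment's point.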

nerdponx 1 day ago 7 replies      
> No designated desks.

Please please please no. Having "personal" space at the office is just as important as having "quiet" space. I do not want to feel like a drifter, a student in the library, or a tenant in a co-working space, unless I am actually one of those things.

concede_pluto 2 days ago 4 replies      
It's been over a decade since I had so much as a semi-private office (shared with just one quiet teammate) so I have to say the death of the open office is exaggerated. And all of their workarounds assume a laptop could ever substitute for a desktop with an ergonomic keyboard and a large screen.

It's amazing that the escalating competition in pay and perks never seem to include space to get into the zone.

davb 2 days ago 15 replies      
> No designated desks

I hate hot desking. I know it's very subjective but when I come into an office for eight hours a day, five days a week, I like to personalise my workspace. I don't drive, I take public transport and walk, and hate carting my laptop and charger around with me. I want my own space, with meeting rooms and collaboration spaces for when I need those things.

I may be alone in saying so, but I'd happily take a modest drop in pay for a private office.

What it feels like to be in the zone as a programmer dopeboy.github.io
268 points by dopeboy  2 days ago   113 comments top 36
sktrdie 2 days ago 4 replies      
I get this too, but it's very draining - similar to doing an intensive workout, or giving a talk at a conference.

The negatives are obvious; less sociable, more easily irritated, wanting to be by yourself. After you've spent a day in the zone, you're not really "party material".

The positive (apart from being very productive) is that I use it to get my negative feelings out of the way - anything that is bothering me in my life is somehow gone when I am in "the zone" - it is truly a zen feeling, as the author explains.

It's also important to mention that you can't force yourself to be in the zone. It comes and goes, with very little control on your behalf. People that try to force themselves in the zone by working harder, are not truly in the zone. It happens seamlessly without you even knowing or wanting it.

For instance, I'm hardly in the zone. It happens probably once every two weeks, if not less - it also depends on what I'm working on; if it's something new and exciting I'm more predisposed to get in the zone.

Being in the zone is like getting an adrenaline rush - you can force yourself to do it more often (go skydiving for instance), but if you do it too often you'll quickly drain out and not enjoy it as you used to.

tekromancr 2 days ago 3 replies      
I haven't been in the zone for months. It's mostly general dissatisfaction with my job, but it's gotten worse of late.

On an average day, there will be 4 hours of calls spread an hour or less apart for the first half of the day, with the potential for surprise calls for the rest of the day. The irony is that a lot of these calls are about why things aren't getting done.

The surprise calls are the worst. Even if I might have 2 hours of uninterrupted time at the end of a day (when I am most tired and frustrated), it is impossible to get focused when there is always a looming threat of interruption. It's gotten so bad that I only get anything done late at night or over weekends, but then I am tired during weekdays and resentful that I had to throw away my free time in order to move a project forward.

tastyface 2 days ago 3 replies      
Speaking of The Zone:

I often see programmers on HN talk about building mental castles of their programs, but I feel like I don't really code the same way. Instead, my thinking seems more "functional". For a given problem, I can often make out the faint outline of an optimal solution, but there's a lot of cruft and misplaced bits in the way. Most of my work involves mentally simulating the consequences of different options and then bending the architecture into such a shape that the whole thing just sort of assembles on its own. I'm only "in the zone" when I have to make that final leap. There's very little castle-building along the way.

As a result, I feel like I'm somewhat incapable of working on massive, multi-part architectures, since I just can't see the running state in my head. Once I zoom in to work on a single component, the rest fade from memory and I lose the big picture. On the other hand, I have no problem working in open-office environments: I don't mentally deal with a lot of program state, so I'm able to just dive right back in. This also influences my code to be more functional, as I know I can rely on e.g. idempotent methods to keep doing what they're supposed to regardless of any finicky global state.
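The idempotent-methods point above can be sketched as a toy contrast in Python (hypothetical functions invented for illustration, not taken from the comment): a function that leans on a mutable global forces you to keep that hidden state in your head, while a pure function can be re-read and re-run in isolation.

```python
# State-dependent style: correctness depends on a hidden, mutable global.
_discount = 0.1

def price_with_global(amount):
    # The result silently changes if some other code mutates _discount.
    return amount * (1 - _discount)

# Functional style: everything the function needs arrives as an argument,
# so calling it twice with the same inputs always gives the same answer.
def price_pure(amount, discount):
    return amount * (1 - discount)

assert price_pure(100, 0.1) == price_pure(100, 0.1) == 90.0
```

The pure version is the one you can "dive right back into" after an interruption, since there is no global state to reload into memory first.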

I wish I could get better at building those "mental castles" since it's a huge barrier to designing complex architectures (like games). I don't want to be stuck forever working on the leaves of the tree. Might be related to OCD: I've had the disorder for a long time and I've sort of conditioned myself to avoid keeping running thoughts in memory before the "OCD daemon" distorts them into something horrible. As a result, much of my thinking is necessarily spontaneous and intuitive, or at the very least wordless.

Can anyone else relate?

rhizome31 2 days ago 9 replies      
I don't think I've ever experienced anything like this. As I practice TDD, my usual workflow is think -> test -> code, code usually being the easiest part. Things like "problems break down instantly", "everything becomes effortless" sound strange and exciting. I wonder how that relates to the concepts of maintainability, cowboy programming and 10x engineer.

I've met a few programmers in my career who were able to write a huge amount of code doing wonderful things without testing at all. One guy I think of would spend days coding without even trying to compile his code and apparently, except for minor typos he could quickly fix, his code was working when he decided to compile and test it. He impressed bosses and colleagues with amazing features developed in a very short time but, on the other hand, nobody on the team was able to maintain his code. This was explicitly stated and accepted by team members, we knew we couldn't maintain his code but we were ready to accept it given the productivity of the guy. It was a trade-off.

This way of working is completely alien to me. I can't think things in my head out of nothing and write working code. I need to start building something and get feedback from the computer to go to the next step. That's why when I was introduced to TDD it immediately made a lot of sense to me. It matched the way I was already operating. If I didn't have this workflow I think I would be unable to write even mildly complex code.

It's interesting how people can operate differently. In a way I'm a bit jealous of those "zone" programmers who can produce amazing things very quickly. But, on the other hand, I can see that I'm also useful because companies hire me and want to keep me. I've seen many times people taking over my code, maintain it and develop it further. I've even been explicitly told a few times that my code was very easy to understand and maintain. Seeing people taking over my code and develop it further is one of the most satisfying things in my work.

xaedes 2 days ago 1 reply      
The Dexterous Butcher

Cook Ting was cutting up an ox for Lord Wen-hui. At every touch of his hand, every heave of his shoulder, every move of his feet, every thrust of his knee, zip! zoop! He slithered the knife along with a zing, and all was in perfect rhythm, as though he were performing the dance of the Mulberry Grove or keeping time to the Ching-shou music.

"Ah, this is marvelous!" said Lord Wen-hui. "Imagine skill reaching such heights!"

Cook Ting laid down his knife and replied, "What I care about is the Way, which goes beyond skill. When I first began cutting up oxen, all I could see was the ox itself. After three years I no longer saw the whole ox. And now, now I go at it by spirit and don't look with my eyes. Perception and understanding have come to a stop and spirit moves where it wants. I go along with the natural makeup, strike in the big hollows, guide the knife through the big openings, and follow things as they are. So I never touch the smallest ligament or tendon, much less a main joint.

"A good cook changes his knife once a year, because he cuts. A mediocre cook changes his knife once a month, because he hacks. I've had this knife of mine for nineteen years and I've cut up thousands of oxen with it, and yet the blade is as good as though it had just come from the grindstone. There are spaces between the joints, and the blade of the knife has really no thickness. If you insert what has no thickness into such spaces, then there's plenty of room, more than enough for the blade to play about in. That's why after nineteen years the blade of my knife is still as good as when it first came from the grindstone.

"However, whenever I come to a complicated place, I size up the difficulties, tell myself to watch out and be careful, keep my eyes on what I'm doing, work very slowly, and move the knife with the greatest subtlety, until, flop! the whole thing comes apart like a clod of earth crumbling to the ground. I stand there holding the knife and look all around me, completely satisfied and reluctant to move on, and then I wipe off the knife and put it away."

"Excellent!" said Lord Wen-hui. "I have heard the words of Cook Ting and learned how to care for life!"

Translated by Burton Watson (Chuang Tzu: The Basic Writings, 1964)


QuercusMax 2 days ago 1 reply      
I find that doing TDD makes it much easier to get in the zone; more importantly, it helps to get back into the zone if I get off track.

I typically stub out a bunch of tests (just empty methods named based on what I plan to test), then go one by one and fill in the tests and write the implementations.

In the codebase I work in, we use a lot of mocks / fakes, so I typically write my tests "in reverse" - first the verification of results, then the mock / fake expectations for what methods should have been called. Then I'll write the actual implementation, and then fill in the mock inputs.

This way, if I get interrupted, it's very easy to transition back into what I was working on, as I make sure to always leave a breadcrumb trail for the next piece (when I run my test, the failure will give me a hint as to the next step to take). And since I have a bunch of stubbed out test methods, once one bit is finished, I can move onto the next one and repeat the process.

robotmay 2 days ago 0 replies      
I find that isolating myself from my surroundings can actually help encourage me to get into the zone. I work from home, usually by myself, but if I want to get into the zone I'll chuck on a pair of headphones; something about that helps focus me, prevents distractions, and pulls me into the zone more easily. I suggested this to a friend recently whilst he was writing his dissertation, and he found that it worked well for him too and really helped him get through it.

However I can't have folk music, as I stumble across a great tune too often and get up to play it instead :)

It is much easier to stay in the zone when there's a physical barrier between you and other people. Even so much as being asked if you want a cup of tea is enough to pull you out of it. I recently asked my boss to stop other people from phoning me if they want me to be productive, and that really helped. I don't think most people understand what it's like, that amazing feeling you get when you're in the zone while programming, and it can be difficult getting them to understand why it's so frustrating to be pulled out of it when a simple message would have sufficed.

stevewillows 2 days ago 0 replies      
Years ago I went through some neurotherapy with a local doctor [1]. One part of the process had clips on my ears and a sensor on my head to read biofeedback (or something along those lines).

The game was simple: there's a silo on the screen with a hot air balloon on the left side. When I get into 'the zone', the balloon goes up and around the silo. This will loop for as long as I can hold it.

It took about two sessions with minor success, then suddenly it clicked. Now I can easily enter that state on demand.

This might sound odd, but the neurotherapy helped eliminate a lot of the negative parts of ADHD without losing the edge that a lot of medications take away. I still have a lot of energy, but I can always sit and focus on the task at hand when I need to.

[1] http://www.swingleclinic.com/about/how-does-neurotherapy-wor...

EdgarVerona 2 days ago 2 replies      
Getting in the zone is something exhilarating for me - I experience euphoria, though I don't really notice the feeling until after it's over or if I get broken out of it.

It was actually something that as of late has started to disturb me: I notice that I live for moments like that, when I get in the zone and the whole world melts away. I feel like a junkie seeking a high, and thinking back on my youth and how destructive I was to my body and my interpersonal relationships in pursuit of "code all the time," I wonder whether that analogy is even more accurate than I'd like to believe.

I look back on my life and wonder if I've actually been a lifelong addict who is lucky enough to have a productive output of his addiction rather than a functioning member of society.

I don't know if others feel or have felt this way, or if it's just a phase that I'm going through. But these are my thoughts at the moment.

Excluse 2 days ago 0 replies      
Contrary to what a lot of people here have said about boring tasks, I find that's the easiest way for me to get into the zone.

While it may not be my most economically productive part of the day (aka I'm not working on the hard, important problems) there's no doubt that for the 10-15 minutes one of those menial tasks requires, I'm in that special state.

An environmental trigger for me is to play familiar music. It doesn't have to be a special playlist; any album I've listened to >50 times will suffice.

Remaining in the zone requires incremental progress (momentum) which I think is easy to find in a boring, repetitive task that's squarely in your wheelhouse.

The real productivity sweet spot is when you're able to get that momentum going on a valuable project.

EliRivers 2 days ago 2 replies      
Oh yeah, here comes the zone, here comes the zone, it's taken an hour of intense study of all this code but now I can see all the pieces at once inside my head and I can feel exactly how to thread this code right through the middle of all of it and -

COLLEAGUE LEANS OVER FROM NEARBY DESK IN OPEN OFFICE:"Hey buddy, what's the password for the - oh no, wait I remember."

Wait, what was I doing? What's all this code on my monitors? Why was I looking at any of this?

gggdvnkhmbgjvbn 2 days ago 0 replies      
Used to get this feeling from video games, now i get paid for it as a programmer. As a result I've found it hard to go back.
cpayne624 2 days ago 2 replies      
I'm a Fed engineer and spent a 4 mo. assignment on an integrated team with Pivotal pairing exclusively. It was a long 4 months for me. There was no "zone." I'm not built for pairing.
Uptrenda 2 days ago 1 reply      
Good luck ever getting in the zone if you do anything with modern blockchains (especially Ethereum). All of the documentation is terrible and you waste hours trying to find a bug only to realise it was a problem with the library all along... assuming, of course, that you don't give up after seeing the "developer tools." What few tools you have for solving problems make it feel like you're trying to carve a delicate ice statue with a giant hammer while wearing clown gloves.

How do you deal with the related stress of having to struggle against needlessly difficult tools, libraries, documentation, and bugs caused by other people?

luckydude 2 days ago 0 replies      
Back when I worked at Sun, and got in the zone all the time, I worked on a ~32 hour daily clock. Because some of the work I was doing would take me about 8 hours to get back to the state of mind where I was yesterday. So instead of working 8 hours I would work for about 16, so I actually made 8 hours of forward progress. The 32 hour "day" was so I could have the rest of a normal day to eat, sleep, etc.

This got to be common enough that someone made a clock, where you could move the hands, it said "Larry will be in here" and stuck it on my door. I think it was sort of a joke but I think some people actually used it.

I couldn't come close to doing anything like that now. And at 55 years old, I can tell you that the days where you get in the zone, for me at least, are few and far between. I used to be able to just go there, now it sort of happens to me and I have to drop everything else and ride it before it fades away.

atom-morgan 2 days ago 0 replies      
What it feels like to be in the zone as a programmer is what it feels like to be in the zone doing any task that can put you into flow state.
engnr567 2 days ago 1 reply      
When I was single, for most of my big projects I used to get 70-80% of the work done in 4-5 days of being in the zone. And then spend months on changing the bells and whistles. Now I have to be home at a reasonable hour and hopefully in a good mood. So I have become hesitant to even get into the zone, because getting out of this state of high efficiency would make me extremely irritable.

How do married people, or those with kids, balance such bursts of creativity with personal commitments to their family?

amelius 2 days ago 0 replies      
I notice that I can be in the zone while programming, but then when I need to research something (do real thinking rather than work by reflexes), I pop out of it.
dghughes 2 days ago 0 replies      
I'm trying my hand at programming and I'm surprised at my progress so far.

But as a person who is very unfocused and poor at math, programming has got to be the worst thing on earth for me. But I like it, and math.

As with anything, learning to focus takes effort, and it's different for each person. But a clean desk, a calm environment, goals, lots of sleep, eating well and, I find, post-exercise time all help. Not just for learning to program, but for any task.

fnayr 2 days ago 2 replies      
This is almost disturbingly accurate to how I feel in the "zone" as well. A consequence of this is it's hard to have a healthy life as a self-employed programmer. If I want the app I'm working on finished faster (and I do or I'll run out of money), I must stay in the zone as long as possible. Which means I must ignore people as long as possible and put off eating/exercising as long as possible as well.
orthoganol 2 days ago 0 replies      
Prerequisites for 'the zone' (why not call it flow? isn't it the same thing?):

a) You have to be interested and eager to get started. If you're not happy with the project, if anything else going on in your life is taking your attention, you will not experience it.

b) When you experience it, you feel like you're a 'real' engineer, like that is your true identity now, your imposter syndrome disappears. So ultimately, if you don't identify as a programmer, as opposed to identifying as someone who programs because it pays well or view it as just a temporary phase of your career until you do management or become a startup CEO or something, you may never experience it.

c) After you experience it, your brain goes "whoaaa" and needs to recover. You won't be able to experience it for at least another 2-3 days, in my experience.

depressedpanda 2 days ago 0 replies      
What a great article; it concisely and succinctly describes what's going on, and does so much better than I could.

I shared it with my significant other, in order for her to better understand the grumpy responses she sometimes gets when asking seemingly innocuous questions like "would you like some tea?"

twodave 2 days ago 0 replies      
I disagree with the premise of this article (though I haven't always). I generally find that I'm always in the zone for _something_, and after more than a decade writing code I've found that often when I'm feeling less productive at it, it's because there is some deficiency in my life, be it social interaction, nutrition, fitness, over-exertion, etc. Over the years I've come to know myself better, which allows me to take better care of myself holistically in order to be not just more productive at work, but more content with life in general. Keep everything in its proper place and all that.
d33 2 days ago 3 replies      
How does "being in the zone" compare to being in the state of "flow" [0]? Are those synonymous?

[0]: https://en.wikipedia.org/wiki/Flow_(psychology)

bcrisman 2 days ago 0 replies      
I get there as well, but it takes a bit. Generally, the zone hits me when I'm in crunch time and I know that I won't have any meetings for a while. My ideas all work together and if I get stuck on something, it's not long before I can figure it out. I can generally get a ton of work accomplished.

But then, someone knocks on my cube to say, "do you know where the elevators are at?"

djhworld 2 days ago 0 replies      
I sometimes find myself in this situation too. I often find that I feel most productive when I reach this state. But it's quite rare, a lot of my day is interrupted by colleagues, meetings, noisy office etc

It's cool, but you can see the downsides. A few weeks ago I basically disappeared for a few days writing some code. Great fun for me, but not exactly boosting growth opportunities for the team.

NicoJuicy 2 days ago 0 replies      
This happens a lot to me, although I always go out on Friday and Saturday evenings. It's just hard sometimes switching it off, and it takes a reasonable effort...

Sometimes I'm more quiet the entire evening and sometimes it's easier. In my mind, I'm constantly thinking about code then, and it's hard to be social.

All around, I'm a very social guy. Just when I leave the zone, I'm not.

Tepix 2 days ago 0 replies      
Recommended related reading: Zenclavier - Extreme Keyboarding by Tom Christiansen


chrisfinne 2 days ago 0 replies      
Well articulated and very concise. This could have been laboriously drawn out into a 10-page article.

"Half as long" writing lesson from "A River Runs Through It": https://www.youtube.com/watch?v=7vRhOdf-6co

astrod 2 days ago 0 replies      
I started using guarana tablets to help stay 'super' focused, but only when required. I find it helps me a lot with productivity, often giving 3+ hours of optimum output. No side effects; the only other supplement I take regularly is fish oil, and I don't drink coffee or energy drinks.
tiku 2 days ago 0 replies      
I've done a minor in Flow, the theory about getting in the zone. Very interesting. It manifests itself mostly when the challenge is hard enough and your knowledge is also good on the subject. Boring tasks won't trigger flow etc.
subwayclub 2 days ago 0 replies      
I try to not stay in flow state. It means that the problem I'm working on is too familiar and I should automate the programming of it so that I'm grinding on something hard again.

Edit: but it's okay if it's a prototype

amiga-workbench 2 days ago 0 replies      
I haven't been getting this much for the last few months, but I think that's due to my scattered workload. I'm about to start a new project build and am looking forward to falling back into the flow.

It's a wonderful feeling; it's like the fog in my mind has been lifted.

kolari 2 days ago 0 replies      
I guess when a programmer is in the zone, he/she is much more effective at communicating with and instructing the machines (in the language defined between humans and machines) than at communicating with other humans (programmers or not).
macca321 2 days ago 0 replies      
Then the next day you realise how to achieve the same thing in a tenth of the code...
klarrimore 2 days ago 0 replies      
You mean when you sit down at your keyboard 45 minutes after you popped those Adderall?
Marko An isomorphic UI framework similar to Vue markojs.com
273 points by jsnathan  2 days ago   145 comments top 18
alansammarone 2 days ago 16 replies      
I've focused on backend for the last 7 years or so, so I've been kind of out of contact with the frontend world. Recently I started working on a personal project, and I thought it would be a good time to learn some of the modern tools people have been using for frontend dev.

I was completely baffled by the myriad of options out there, how complex they look (note I've been working on very high performance, distributed backend applications, so complexity in itself is not an issue), and how it's very unclear when to use any one of them or what each one is good for. I tried Angular and React, and both feel like almost a different language. You have to learn their internals to work effectively with them, and it often looks like they create more complexity than the original complexity they were trying to reduce. I have no problem learning new things, in fact, I love it! It just feels like there are other things to learn that will stick around for longer - JS frameworks/libraries seem to be very hype-driven these days. What are your thoughts on this?

bryanph_ 2 days ago 4 replies      
Nowadays I only consider switching front-end frameworks if there is a substantial conceptual improvement. React did this for me due to its uni-directional dataflow and component-based architecture. There is nothing new here conceptually.
tangue 2 days ago 4 replies      
I discovered Marko in one of the various react-alternative topics that emerged yesterday, and it looks like something sane, which is rare in the JS ecosystem. I'm wondering if anyone on HN has used it in a real-world project and how it was.
jcelerier 2 days ago 5 replies      
Meanwhile in real reactive environments:

    import QtQuick 2.7
    import QtQuick.Controls 1.1

    Rectangle {
        property real count: 0
        Column {
            Text {
                text: count
                color: "#09c"
                font.pointSize: 24
            }
            Button {
                text: "Click me!"
                onClicked: count++
            }
        }
    }

Also the so-called "60fps smooth" animation has noticeable stutters on Firefox on Linux.

jarym 2 days ago 0 replies      
So many new UI frameworks, yet no one really mentioned SmartClient.com (LGPL licensed).

I've been using it for almost 10 years and some of the concepts they pioneered have only recently been discovered by the new kids.

I still use it, though some of the 'fixes' they had to put in place to support old browsers are often polluting the DOM unnecessarily in modern browsers (this is something I hope they fixed).

My favourite aspects of it are that I can declare components declaratively, it has a technique called autoChildren that allows managing a tree of components as a flat set (useful for complex components like tabsets), and the data binding layer. The documentation is top notch (which it needs to be given the depth of stuff in there).

Again, all of this was around in 2009 since when I started using it - and not sure how many years before I found it they'd been going.

znpy 1 day ago 1 reply      
I just skimmed the page and saw that sort of coloured sine wave... Then read: "The above animation is 128 <div> tags. No SVG, no CSS transitions/animations. It's all powered by Marko which does a full re-render every frame."

Well, as soon as my browser renders that thing, the browser process reaches 122% CPU usage (according to htop). And I'm using a 4th-gen Core i7 processor. I can literally (literally in the literal sense of the word) hear my fan spin up. That hurts battery life so much.

brianon99 2 days ago 0 replies      
Don't abuse the term isomorphic. To prove two groups are isomorphic mathematically, you have to show there exists a product-preserving map between the groups.

Just kidding.

forkLding 1 day ago 1 reply      
Quick question: is there anything conceptually interesting about Marko that's different in ideology and structure from Angular, React, or Vue?

For background on why I'm asking: I'm an iOS mobile dev and was a web dev before, and I often borrow web dev structures and ideas, as there is less structure, fewer frameworks (unless you count RxSwift), and fewer general philosophies in iOS mobile dev, aside from best practices and tips like avoiding Massive View Controllers, etc.

spacetexas 2 days ago 1 reply      
Ecosystem is so important these days. There might be technical reasons for choosing this, but considering the support (knowing Stack Overflow answers will be available) and the pre-existing component ecosystems for Vue and React, I can't see a reason anyone would pick this.
pier25 2 days ago 1 reply      
Here's a great introduction to Marko by its main dev:


seangates 1 day ago 0 replies      
For those interested in spreading this on PH: https://www.producthunt.com/posts/marko-js
SeriousM 1 day ago 1 reply      
I don't believe the marketing when it says that working with something is "fun". Working is maybe enjoyable sometimes, but it will stay hard work if you're doing it right.
jcranberry 1 day ago 0 replies      
Isomorphic UI? WTF??

What's next, homotopic web frameworks and commutative app diagrams???

dolphone 1 day ago 0 replies      
You had me at isomorphic.
stuaxo 2 days ago 0 replies      
This looks pretty decent.
akras14 2 days ago 0 replies      
Yay, another front end framework! /s
dmitriid 2 days ago 1 reply      
Oh hi there, yet another awkward not-really-html-not-really-js templating language

    class {
        onCreate() {
            this.state = { count: 0 };
        }
        increment() {
            this.state.count++;
        }
    }

    <div>The current count is ${state.count}</div>
    <button on-click('increment')>Click me!</button>

Sonos: users must accept new privacy policy or devices may cease to function zdnet.com
204 points by ralphm  13 hours ago   203 comments top 29
cm2187 5 hours ago 5 replies      
To be honest I have had a pretty average experience with Sonos so far. It is connected with Cat6 Ethernet and professional switches, with no other connectivity problems on that network (and I tested all cables). I have 3 systems (a Play:1, a Play:5, and the Sonos amp), and they keep losing track of each other; I have to regroup them regularly. They also struggle with long music tracks (i.e. 1h podcasts off a Synology shared drive) and often stall in the middle.

If they brick my devices, I will only be half upset.

justinjlynn 12 hours ago 9 replies      
It seems like nobody actually owns anything any more - that we're all just digital serfs living on someone else's land. I really don't know why anyone would willingly make such a deal.
solomatov 50 minutes ago 1 reply      
This problem should be solved in a legislative way, similar to Europe's GDPR. There should be some minimum privacy rights which can't be opted out of and which are protected by government. That's the only viable solution. Markets don't help here.
sverige 46 minutes ago 0 replies      
I hate all smart devices. The TV should just be a TV, the dishwasher should wash dishes, the refrigerator should keep stuff cold, the washing machine and dryer should clean my clothes, and speakers should just produce sounds. I have yet to hear any compelling reason to make these devices dependent on software.
MikeGale 2 hours ago 2 replies      
I suspect that legislators are so far behind the curve on this, that they'll never protect decent humans from such guys.

Answer: Forget the protection afforded by the state. Protect yourself. Blacklist the scum manufacturers, warn your acquaintances.

What other suggestions are there?

pantsofhonking 5 hours ago 1 reply      
Wow, talk about blowing something out of proportion. The new software comes with new terms. If you don't accept the new terms, you keep the existing software. Over time, it is possible that the current software will stop working with e.g. some future Pandora API, and you'll have a choice of either updating your software or foregoing that feature.

I have a Sonos in every room of my house and I've owned them since the very first generation. Sonos has been extremely good about updating the software. The current software still works on the very first hardware, with all the functionality save for a single feature, room-correcting equalization, that requires the newer DSP. This company is the gold standard of ongoing software support for consumer goods and this article is trying to spin the situation in just the perfect way to make the Internet commentariat explode.

iomotoko 17 minutes ago 1 reply      
mhm, please excuse me all if this is wrong, but isn't this the exact same way it works with pretty much all of the updates from e.g. Apple and co?

Let's say a new iTunes update comes along that requires the user to opt into a privacy policy. If there happens to have been a change in said policy since the last update, then accepting the new conditions is required in order to install the update. Same for an update to browsers, iOS, Android, ...

I am not in favour, just confused as to why this specific case is singled out. Especially since not updating critical software (operating systems, browsers, et cetera) seems to have far more serious consequences than with a speaker?

bogomipz 51 minutes ago 1 reply      
I am sincerely curious, and maybe a Sonos owner can offer some feedback: what does Sonos offer me now that a streaming music provider, a smartphone, and a portable Bluetooth speaker don't?

I understand that Sonos can stream to multiple "zones" simultaneously but besides the occasion of a house party how often is this necessary?

This news is just another reason for me to never buy one.

lamecicle 12 hours ago 2 replies      
I remember a time when the phrase "if you don't know what the product is, you're the product" made sense.

Now it's, "if you don't kn... oh f*ck it, you're the product!"

CaptSpify 4 hours ago 0 replies      
I looked at these speakers a few months ago. They look really cool. As soon as I saw that they require phoning home, I said "lolno" and built my own speaker system with RPIs.

I love the idea of smart devices, but only as long as the software is Free and Open. I really don't understand people who think situations like this are acceptable.

eveningcoffee 5 hours ago 1 reply      
There should be way to fight it back. If this kind of thinking spreads even more it will suffocate our society.
jeffehobbs 11 hours ago 1 reply      
I'll be honest, I'm just glad to see they are still actually working on their software.
pedrocr 4 hours ago 4 replies      
Anyone have another suggestion for a pair of wifi speakers that can be assigned to Left/Right to get stereo and can be network streamed to and have another device on the network with a line-in?

I was about to buy 2 Play1's and a Connect to do a stereo install in a room where I don't want to run speaker cable. I had researched wackier home-built solutions and was going to give up and go for the Sonos. Now I'm once again considering wackier home-built stuff like 3 raspberry pi's attached to line-in and two dumb powered speakers.

mnw21cam 11 hours ago 1 reply      
Sale of goods act (and similar consumer protection laws in so many countries around the world)?
swiley 10 hours ago 1 reply      
If you can't read the source and build the firmware yourself you don't own the device.

It's that simple. Stop putting up with closed non trivial firmware and these sorts of problems go away.

voidz 12 hours ago 4 replies      
The actual solution is simple: stop using these devices.
allwein 11 hours ago 1 reply      
I might understand this if it was for new customers going forward. But I don't understand how they can tell their existing customers this and not expect a lawsuit.
hkmurakami 3 hours ago 0 replies      
This is why I will never want my home to be "smart".

You'll have to pry my physical wall switches and copper wires from my cold, dead hands.

softwaredoug 11 hours ago 5 replies      
Is there a use case for Sonos that good Bluetooth speakers don't address more simply?
avs733 10 hours ago 0 replies      
What is provided to the consumer/user in exchange for agreeing to this contract? I assume, because I am becoming increasingly nihilistic about technology, that challenges to this would fail in court. However, this seems to fail both the consideration and the competency and capacity elements of a functional contract.
JimRoepcke 5 hours ago 0 replies      
Sonos: I am altering the deal. Pray I don't alter it any further.
INTPenis 12 hours ago 1 reply      
I bought one of those for my gf because I wanted to see if it was any good.

Quick review.

iOS users have to use their Spotify app, which is lousy.

Google Play users can cast to it, thankfully.

The major positive point is that it uses wifi and supports casting from Google Play. But my gf, who uses an iPhone, hates it.

Overall we prefer the Marshall bluetooth speakers over Sonos because in an apartment there's rarely a need for wifi casting music.

Edit: Chromecast audio is also a viable alternative. Based on how well my regular Chromecast (video) works for me I assume the audio one is as good.

pm24601 9 hours ago 2 replies      
And reason #458 why I am skipping the whole "IoT revolution" in my home.

I consider this motivation for DIY.

circa 2 hours ago 0 replies      
For some reason I read this as Sophos. That would not be good.
DarkKomunalec 11 hours ago 1 reply      
RMS was right again.
exabrial 1 hour ago 0 replies      
Lawsuit time
thrillgore 5 hours ago 1 reply      
I've long considered going to Sonos, but I think i'll stay with my Plex VM and my NAS
natch 11 hours ago 0 replies      
Maybe there is a Google acquisition looming and this is being dictated by Google. Speculation obviously, but look what happened with Nest.
throwaway2016a 9 hours ago 2 replies      
As a Sonos user (I have close to $2000 worth of products), this actually doesn't bother me.

To all the people talking about ownership: I find it hard to believe the aux-in will cease to work. So worst case, they turn into regular speakers.

What it sounds like is you won't be able to update your firmware. So more likely than not, everything would keep working but random Internet related services (like Spotify Integration) may break over time because, for instance, if Spotify changes their API you won't get the software update to fix it.

And that is why I think it is OK. Software updates over the Internet are always subject to licensing. That is not new and not unique to Sonos.

The HERE IS key cheney.net
237 points by davecheney  1 day ago   53 comments top 9
kps 1 day ago 2 replies      
Not mentioned in the article is that ENQ is a standard ASCII character (0x05, and previously abbreviated WRU for Who are you?) that causes the device on the other end of the line to send back its answerback message.

On electromechanical teletypes the answerback message was programmed by breaking off tabs from a rotating drum, like an inverse music box.
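The ENQ/answerback exchange described above can be modeled with a toy terminal. This is a hypothetical sketch in Python; the `ToyTerminal` class and its answerback string are invented here for illustration, not taken from any real terminal firmware:

```python
# Toy model of the ENQ/answerback handshake: the host sends bytes down
# the line, and the terminal transmits its programmed answerback message
# whenever it sees ENQ (ASCII 0x05).
ENQ = b"\x05"

class ToyTerminal:
    def __init__(self, answerback: bytes = b"ADM-3A TTY7"):
        self.answerback = answerback
        self.output = bytearray()  # bytes the terminal sends back to the host

    def receive(self, data: bytes) -> None:
        for i in range(len(data)):
            if data[i:i + 1] == ENQ:
                self.output += self.answerback
            # any other byte would just be displayed; ignored in this sketch
```

Sending `b"hello\x05"` to such a terminal makes it answer with its identity without any user input, which is exactly what enabled the attacks described elsewhere in this thread.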


wodenokoto 1 day ago 11 replies      
It is kinda funny that Apple didn't do anything about Caps Lock when they redesigned their keyboard with the touch bar.

I cannot imagine professionals or casual users who would need quick access to turning Caps Lock on and off. When you need Caps Lock over Shift, it is because you are planning to write a lot of all-caps text, so taking a second to turn it on via the touch bar seems okay. It is a prime candidate to be relegated to the touch bar, while plenty of professionals use ESC all the time while touch-typing.

While they were at it, the switch-window `command+~` shortcut is almost unreachable on non-US-like keyboard layouts.

If they were gonna break professional users' keyboard workflows, why not fix some of the more glaring mistakes in the current keyboard layout while they were at it?

scott_o 1 day ago 1 reply      
It sounds like this was an automatic thing, the host could query the terminal and would get a response without any user input.

So that still leaves me with the question of why the key exists?

What use cases would you have for voluntarily sending the host your "identification"? Was this used for authentication?

samlittlewood 1 day ago 1 reply      
I seem to remember that being used as an attack vector at college: identify a terminal that was logged on as root (albeit physically inaccessible), find a way of getting a message to it, then send a string that programmed the answerback and then triggered it. The usual payload was moving your 'special' version of a common suid program into place, possibly along with a tweaked version of 'sum'.

This started with 'write' etc. but became an escalating arms-race.

binarycrusader 1 day ago 0 replies      
A colleague had this to say:

I remember a spate of answerback hacks with vt100s. the remote host could program the message by sending an escape sequence, and then get the vt100 to type the string back. you could make the tty execute commands that would give the attacker privs, and stuff like that. The main fix was hardening mail clients to filter escape sequences; simpler days to be sure, but the basic flaw (non-filtered text) still occurs in html forms
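The "filter escape sequences" hardening mentioned above amounts to stripping control characters from untrusted text before display. A minimal sketch (illustrative only, not any particular mail client's actual code):

```python
import re

# Remove C0 control characters (including ESC, 0x1b) and DEL (0x7f),
# while keeping newline and tab, so untrusted text cannot inject
# terminal escape sequences such as answerback reprogramming.
CONTROL = re.compile(r"[\x00-\x08\x0b-\x1f\x7f]")

def sanitize_for_terminal(text: str) -> str:
    return CONTROL.sub("", text)
```

With ESC stripped, a payload like `"\x1b[31m"` arrives as inert printable text.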

kazinator 1 day ago 2 replies      
I noticed there are some commonalities between this keyboard and the Japanese layout on PC keyboards.

For instance, note the co-location of * and : characters on the same key. It's not in the same place on the Japanese layout, but the co-location is the same.

Another shared feature between the two is the co-location of the = and - (equals and dash).

Next, the tilde in the general same area on the Japanese layout as on this terminal, close to the Return key.

Lastly, the correlation between the numeric row keys and their Shift glyphs is almost the same on the Japanese layout and this terminal!

          1 2 3 4 5 6 7 8 9 0
  JPN-PC: ! " # $ % & ' ( ) <blank>  \__ same!
  ADM-3A: ! " # $ % & ' ( ) <blank>  /
  US-101: ! @ # $ % ^ & * ( )
There may be other similarities; this is just what I noticed at a glance.

gpvos 1 day ago 0 replies      
Many VTxx emulators still have this answerback function. Look in the PuTTY configuration under Terminal, "Answerback to ^E" (^E = ENQ).
ChuckMcM 1 day ago 0 replies      
And, if you were sneaky, you could find out where someone was chatting from by sending the answerback sequence, and if you didn't like the computer center system programmer you could send a wall(1) with it and crash the Gandalf terminal server.
work_account 1 day ago 4 replies      
But what does the RUB key do?
Rolling Your Own Blockchain in Haskell michaelburge.us
231 points by nicolast  2 days ago   32 comments top 7
lambdaxdotx 2 days ago 0 replies      
Here is another "minimum viable" blockchain implementation in Haskell: https://github.com/adjoint-io/nanochain.

It's a bit simpler in implementation; the relevant data structures are defined a bit differently, so it could give a nice alternate perspective about what a blockchain written in Haskell may look like.

alphaalpha101 2 days ago 4 replies      
I like the idea behind this article, but this isn't the way to do it. It hides the simplicity of the blockchain concept behind arcane syntax and overly complicated higher-order typing constructs.

  newtype BlockF a = Block (V.Vector a)
    deriving (Eq, Show, Foldable, Traversable, Functor, Monoid)

  type Block      = BlockF Transaction
  type Blockchain = Cofree MerkleF Block
Does anyone really think this is how one should write software? I think that constructions like Cofree are interesting, but I don't think they're programming.
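For contrast, the underlying hash-chain idea needs no exotic machinery. A minimal sketch in Python (function names invented here, not taken from the article; omits proof-of-work and networking):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    return {"transactions": transactions, "prev_hash": prev_hash}

def valid_chain(chain: list) -> bool:
    # Each block must record the hash of the block before it, so
    # tampering with any block invalidates every later link.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))
```

Whether Cofree/MerkleF buys anything over this is exactly the debate here.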

umen 2 days ago 1 reply      
Great stuff! Where can I learn about blockchain at a beginner's level, with code examples?
ngcc_hk 2 days ago 1 reply      
Any other languages? Python and Lisp would be interesting.
macsj200 2 days ago 0 replies      
Initially read submission site as malbolge.us
cyphar 2 days ago 2 replies      
It's quite a cute idea, implementing a blockchain (a fundamentally impure concept) in Haskell. If you're interested in other cool Haskell projects, check out JoeyH[1].

[1]: http://joeyh.name/code/

Studying how Firefox can collect additional data in a privacy-preserving way groups.google.com
252 points by GrayShade  14 hours ago   390 comments top 12
kannanvijayan 12 hours ago 18 replies      
I can do a quick summary of what's being proposed and why. I work in the JS team at Mozilla and deal directly with the problems caused by insufficient data. Please note that I'm speaking for myself here, and not on behalf of Mozilla as a whole.

Tracking down regressions, crashes, and perf issues is painful without good telemetry about how often they're happening and in what context. Issues that might otherwise have taken a few days to resolve with good info become multi-week efforts at reproducing the issue with little information.

It simply boils down to the fact that we can't build a better browser without good information on how it's behaving in the wild.

That's the pain point anyway. Mozilla's general mission, however, makes it very difficult to collect detailed data - user privacy is paramount. So we have two major issues that conflict: the need to get better information about how the product is serving users, and the need for users to be secure in their browsing habits.

We also know from history that benevolent intent is not that significant. Organizations change, and intents change, and data that's collected now with good intent can be used with bad intent in the future. So we need to be careful about whatever compromise we choose, to ensure that a change of intent in the future doesn't compromise our original guarantees to the user.

This is a proposed compromise that is being floated. Don't collect URLs, but only top-level+1 domains (e.g. images.google.com), and associate information with that. That lets us know broadly what sites we are seeing problems on, hopefully without compromising the user's privacy too much. Also, the information associated with the site is performance data: the time spent by the longest garbage-collection, paint janks.
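The "collect only the coarse domain" idea can be illustrated like this. Note this is a naive sketch: a real implementation would consult the Public Suffix List, since keeping only the last two labels wrongly collapses e.g. news.bbc.co.uk to co.uk.

```python
from urllib.parse import urlparse

def coarse_domain(url: str) -> str:
    # Reduce a full URL to its last two DNS labels before reporting,
    # discarding the path, query string, and deeper subdomain detail.
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:])
```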

This is a difficult compromise to make, which is why I assume it took so long for Mozilla to come around to proposing this. These public outreaches are almost always the last stage of a length internal discussion on whether proposals fit within our mission or not.

I'm not directly involved in this proposal, but I personally think it's necessary, and strikes a reasonable balance between the privacy-for-users and actionable-information-for-developers requirements.

Vinnl 13 hours ago 6 replies      
Note: "planning" means "reaching out for feedback about".

Also interesting: the method they plan on using for anonymising this: https://en.wikipedia.org/wiki/Differential_privacy#Principle...

If that is not sufficiently anonymous, then please submit the reasoning why to Mozilla.

frankmcsherry 9 hours ago 0 replies      
As someone familiar with differential privacy, and (somewhat less) with privacy generally, here are some suggestions for Mozilla:

1. Run an opt-out SHIELD study to answer the question: "how many people can find an 'opt-out' button?". That's all. You launch this at people with as much notice as you would plan on doing for RAPPOR, and see if you get a 100% response rate. If you do not, then 100% - whatever you get are going to be collateral damage should you launch DP as opt-out, and you need to own up to saying "well !@#$ them".

2. Implement RAPPOR and then do it OPT-IN. Run three levels of telemetry: (i) default: none, (ii) opt-in: RAPPOR, (iii) opt-in: full reports. Make people want to contribute, rather than trying to yank what they (quite clearly) feel is theirs to keep. Explain how their contribution helps, and that opting-in could be a great non-financial way to contribute. If you give a shit about privacy, work the carrot rather than the stick.

3. Name some technical experts you have consulted. Like, on anything about DP. The tweet stream your intern sent out had several historical and technical errors, and it would scare the shit out of me if they were the one doing this.

4. Name the lifetime epsilon you are considering. If it is 0.1, put in plain language that failing to opt out could disadvantage anyone by 10% on any future transaction in their life.

I think the better experiment that is going on here is the trial run of "we would like to take advantage of privacy tech, but we don't know how". I think there are a lot of people who might like to help you on that (not me), and I hope you have learned about how to do it better.

embik 13 hours ago 3 replies      
This is ridiculous. I use and recommend Firefox for pure ideological reasons, because frankly, Chrome/Chromium is miles ahead of them.

If they start opt-out tracking using the same approach as Google I do not see any reason to use it nor install it for my friends and family. That's some data for you, Mozilla.

huhtenberg 13 hours ago 3 replies      
The single largest advantage of Firefox over other browsers is that despite all odds and occasional missteps they managed to respect users' desire for complete privacy.

 For Firefox we want to better understand how people use our product to improve their experience. 
Sure thing. But the fact that they are unhappy that some (many?) people are opting-out from the data collection is merely a sign that they don't want to understand why people are using Firefox in the first place. By opting out from the data collection people effectively tell them over and over again that they don't want for Mozilla "to understand how they use Firefox" or "to improve their experience", not at the expense of their privacy.

No phoning home. No telemetry, no data collection. No "light" version of the same, no "privacy-respecting" what-have-you. No means No. Nada. Zilch. Try and shovel any of that down people's throats and the idea of Firefox as a user's browser will die.

kogepathic 13 hours ago 3 replies      
> What we plan to do now is run an opt-out SHIELD study [6] to validate our implementation of RAPPOR.

IMHO, this is a bad idea. Many people I know already use Firefox because they're wary of giving Google (Chrome) all their data.

Firefox should make this feature opt-in only.

cJ0th 13 hours ago 0 replies      
While I do understand the allure of collecting this kind of data I find it highly disturbing to see this from Mozilla.

I think not having perfect information about the users is a trade-off that should be made in order to stay an alternative to most other browsers. There are still ways to get more data by other means, though. When it comes to most-visited websites, for instance, the Alexa ranking should give a good, if not perfect, idea.

stutonk 11 hours ago 0 replies      
Just want to add a little volume to the general opinion here that collecting user data, no matter how anonymous, is a terrible idea for a product whose only appealing quality is that it respects its users' privacy.

Data is both highly alluring and addictive, as evidenced here by Mozilla being potentially willing to shoot itself in the foot to get some. What's to keep this from becoming a frog-in-boiling-water kind of situation? How can I trust that Mozilla is going to adhere to their own stated standards? The easiest answer is that I won't have to, because I can just use something else. Personally, the only reason I use Firefox is because it's slightly less convenient to set up a security-patched version of Chromium.

Other people in this thread have made the excellent point that not enough people opting in to data collection is in itself a critical piece of data. Moreover, questions such as "Which top sites are users visiting?" can be answered by looking at data from page-ranking services, and then they can go to those sites on their own testing equipment to answer their other questions. A little investment in acquiring this data without spying, plus maybe a wider array of testing equipment, is probably less costly than further loss of the market share they're already struggling to hold.

bugmen0t 5 hours ago 0 replies      
The linked paper to RAPPOR is really, really noteworthy here.

In essence, Firefox will ask itself whether it visited website X and flip a coin: if it's heads, it will lie to the server and send a random boolean; if it's tails, it will send the truth. This way there is no way for anyone (including Mozilla) to know whether you actually visited the website, but the statistics work out such that the collective data from everyone gives a good representation of all users. I find this a neat technology for collecting data in a privacy-preserving way. And there's an opt-out (opt-in won't work because it creates bias and provides messy results).

I really, honestly don't understand why people are so upset.
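The coin-flip scheme described above is classic randomized response, the simplest ancestor of RAPPOR. This sketch is purely illustrative and is not Mozilla's implementation:

```python
import random

def randomized_response(truth: bool, rng=random.random) -> bool:
    # Heads (p = 1/2): ignore the truth and report a fair coin.
    # Tails: report the true answer.
    if rng() < 0.5:
        return rng() < 0.5
    return truth

def estimate_true_rate(reports) -> float:
    # E[reported rate] = 0.5 * p + 0.5 * 0.5, so invert to recover p.
    reported = sum(reports) / len(reports)
    return (reported - 0.25) / 0.5
```

No single report reveals whether a given user visited the site, yet the aggregate rate is recoverable to within sampling noise.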

dagenleg 12 hours ago 1 reply      
In the end Mozilla is simply going to go through with it and there's nothing we can do about it. Just like with the killing of the XUL plugins - the company simply didn't care about the outcry. I mean why would they? The amount of people that cares about stuff like 'customization' or 'privacy' is slim.

So we will toothlessly complain, but then the changes will be shoved down our throats, because obviously why would one care what the non-targeted demographic whines about. And of course it will be framed as being 'for our own good', and half of the people complaining will just deal with it, just like the majority already does.

dhimes 12 hours ago 2 replies      
I generally trust Mozilla, but I really don't understand what they are going to get out of the data. Their explanation leaves me scratching my head. Perhaps it's simply because I don't work on browsers?

How does seeing which sites users visit that need Flash drive their decision-making? Either they support Flash, or they don't.

And- ditto for "Jank" (not sure I understand that term, frankly- why is it capitalized?). Some developers don't optimize well- how is Mozilla going to use this? I think they do a good job over on MDN.

I guess I'd like to be sure I understand what problem they are trying to solve. Maybe they feel like without understanding their users they can't keep up with Chrome. I see people talking about how good Chrome is. And I must admit- it is sweet for me too. But that may be because (1) I don't have it loaded up with add-ons like I do Mozilla and (2) they have optimized for certain sites like youtube and gmail and I just can't get Firefox to work all that well on those sites.

But I'm not convinced that they need my data to fix that.

EDIT: On the other hand, Chrome seems to lose my passwords on every upgrade, so it won't be my main browser until it fixes that little issue, which is going on, what, 5 years now?

damnfine 13 hours ago 2 replies      
I say it over and over: you cannot completely anonymize data with any reliability. Please note the qualifier; many systems work for many vectors, but any sufficiently large dataset can be used to graph habits and correlate them. Maybe there is a safe way, but I put the onus of proving it on the person implementing it.
Going Multi-Cloud with AWS and GCP: Lessons Learned at Scale metamarkets.com
219 points by jbyers  1 day ago   53 comments top 13
nodesocket 1 day ago 4 replies      
One of the biggest benefits of Google Cloud is networking. By default GCE instances in VPC's can communicate with all instances across zones and regions. This is a huge plus.

On AWS, multi region involves setting up VPN and NAT instances. Not rocket science, but wasted brain cycles.

Generally, with GCP, setting up clusters that span three regions should provide ample high availability, and most users don't need to deal with multi-cloud headaches. KISS. You can even get pretty good latency between regions if you set up North Carolina, South Carolina, and Iowa. Soon West Coast clusters will be possible between Oregon and Los Angeles (region coming soon).

ad_hominem 21 hours ago 2 replies      
If any Google Cloud people are listening I wish you had an equivalent to AWS's Certificate Manager. Provisioning a TLS certificate which automatically renews for eternity (no out-of-band Let's Encrypt renewal process needed) and attaching it to a load balancer is so nice compared to Google Cloud's manual SslCertificate resource creation flow[1].

To a lesser extent, it's also nice registering domains within AWS and setting them to auto renew. Since Google Domains already exists, it would be neat to have this feature right inside Google Cloud.

[1]: https://cloud.google.com/compute/docs/load-balancing/http/ss...

manigandham 20 hours ago 0 replies      
When it comes to GCP:

- They have Role Based Support plans which offer flat prices per subscribed user which is a much better model. [1]

- Live migration for VMs means host maintenance and failures are a minor issue, even if all your apps are running on the same machine. It's pretty much magical, and when combined with persistent disks it effectively gives you a very reliable "machine" in the cloud. [2]

1. https://cloud.google.com/support/role-based/

2. https://cloud.google.com/compute/docs/instances/live-migrati...

vira28 1 day ago 2 replies      
One thing that I liked with GCP is their recommendations for cost saving. I spun up a compute engine for a hobby project and within minutes they gave recommendations to reduce the instance size and how much I can save. I don't think AWS offers something like that. Correct me if I am wrong.
azurezyq 1 day ago 0 replies      
One extra point for tracking VM bills:

GCE bills are aggregated across instances. To get more detailed breakdown, you can apply labels to them and the bills will have label information attached in BQ.

Alternatively, you can leverage GCE usage exports here:


Which has per-instance per-day per-item usage data for GCE.

Disclosure: I work for Google Cloud but not on GCE.

user5994461 1 day ago 2 replies      
>>> on AWS you have the option of getting dedicated machines which you can use to guarantee no two machines of yours run on the same underlying motherboard, or you can just use the largest instance type of its class (ex: r3.8xlarge) to probably have a whole motherboard to yourself.

Not at all. Major mistake here.

When you buy a dedicated instances on AWS, you reserve an entire server for yourself. All the VMs you buy subsequently will go to that same physical machine.

In effect, your VMs are on the same motherboard and will all die together if the hardware experiences a failure. It's the exact opposite of what you wanted to do!

dswalter 1 day ago 2 replies      
If AWS were to go to a per-minute billing cycle, they would be instantly more price-competitive with Google's offering. Or, to put it the other way around, those leftover minutes form a significant chunk of AWS's profit margin.
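The difference is easy to quantify. The rates below are made up for illustration; the models reflect 2017 pricing, when AWS billed EC2 per instance-hour while GCE billed per minute with a 10-minute minimum:

```python
import math

def hourly_billed_cost(minutes_used: float, rate_per_hour: float) -> float:
    # Per-hour billing: each instance run is rounded up to a whole hour.
    return math.ceil(minutes_used / 60) * rate_per_hour

def minute_billed_cost(minutes_used: float, rate_per_hour: float,
                       minimum_minutes: float = 10) -> float:
    # Per-minute billing with a minimum charge per run.
    return max(minutes_used, minimum_minutes) * rate_per_hour / 60
```

A 61-minute job costs two full hours under the first model but just over one hour under the second; those rounded-up minutes are the margin the parent refers to.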
matt_wulfeck 1 day ago 0 replies      
> As we investigated growth strategies outside of a single AZ, we realized a lot of the infrastructure changes we needed to make to accommodate multiple availability zones were the same changes we would need to make to accommodate multiple clouds.

Maybe the author means multiple regions? Multi-AZ is so easy; everything works. Multi-region is much harder.

whatsmyhandle 1 day ago 2 replies      
Very nice writeup! A nice, detailed read that was easy to understand.

It seems to focus more on raw infrastructure (EC2 vs. GCE) than on each company's PaaS offerings. Obviously AWS has the front-runner lead here, but I would be super curious to see a comparison of RDS vs. Cloud Spanner, for instance. (Pun unintentional, but then realized and left in there.)

swozey 1 day ago 0 replies      
Great thorough comparison and falls very into line with my experience. Definitely worth the read. Thanks!
throwaway0071 1 day ago 0 replies      
Off Topic: it's frustrating that these companies spend quite a lot of time and money learning about the complexities of their infrastructure but when you're interviewing at such companies, you're expected to have answers for everything and a complete strategy for the cloud.


hobolord 3 hours ago 0 replies      
Great post! How difficult is it to switch from an AWS EC2 instance to the GCP version?
mrg3_2013 1 day ago 0 replies      
Nice post! I will be using it as a reference.
IOCCC Flight Simulator aerojockey.com
248 points by xparadigm  2 days ago   29 comments top 14
userbinator 2 days ago 0 replies      
I am reminded of this 4 kilobyte demo: https://www.youtube.com/watch?v=jB0vBmiTr6o

(Source code and discussion at https://news.ycombinator.com/item?id=11848097 ; explanation at http://www.iquilezles.org/www/material/function2009/function... )

although the flight sim is a little more interesting from a technical perspective, since it's interactive and also not using the GPU to do most of the computation.

Also, despite the source code being obfuscated, observe that the "external symbols" which are still visible, e.g. XDrawLine, XSetForeground, etc., already give a pretty good overview of how it does what it does. In general, when reverse-engineering or analysing software, inspecting where the "black box" interacts with the rest of the world is an important part of understanding it.

cdevs 2 days ago 0 replies      
And yet it looks like my coworkers' everyday PHP code. Seriously though, way to go above and beyond in the competition, with something as interesting on its own as an X Windows flight sim with easy-to-modify scenery.
Grustaf 2 days ago 1 reply      
Wow, that's really impressive. I just wrote a flight sim the other day that I thought was frugal because it's less than 10 kloc. This one is about 100 times shorter!
iso-8859-1 1 day ago 0 replies      
I have modified the Linux SABRE flight sim to compile and run on modern Linux systems. Only requires DirectFB (replaced SVGALIB) or SDL2. Build with scons.


Images and review at:


senatorobama 1 day ago 1 reply      
I have been a low-level systems coder for about 10 years, and I have no clue how demos are made, beyond the fact that they use procedural generation and/or shaders.

Is there a guide on how to start making one?

549362-30499 2 days ago 1 reply      
Pretty cool! I don't know why I expected otherwise, but the makefile works just fine despite being from 1998. It took 30 seconds to download the files, compile, and start playing the game!
throwaway7645 2 days ago 2 replies      
I love using small amounts of code to write beautiful programs... preferably games. I wish there was a book with 20 programs like this, just not obfuscated. I bet I would learn a lot.
shakna 2 days ago 0 replies      
Downloaded, ran "make banks"... and played. [0] Works like a charm!

And despite the seriously obfuscated nature, I only got three warnings on compilation.

Banks compiled down to 19kb (though dynamically linked), which is still fairly tiny (though much larger than the source code).

Now excuse me, I'm going to have some old school fun.

[0] https://imgur.com/RvEM5q2

Sir_Cmpwn 2 days ago 1 reply      
I've been wondering when IOCCC 2017 is going to happen. Does anyone have the word?
ipunchghosts 2 days ago 0 replies      
A writeup or video explaining how the code works would be fascinating.
mschuster91 2 days ago 0 replies      
Is there any explanation on how this thing actually works?
_benj 2 days ago 0 replies      
This might go beyond obfuscated to the art realm!
foota 2 days ago 0 replies      
Thank God for orthogonal matrices
org3432 2 days ago 4 replies      
The IOCCC is fun, but isn't it time for C/C++ to get standard formatting for consistent readability? Like Go and Python.
       cached 23 August 2017 02:11:01 GMT