Hacker News with inline top comments - Best - 24 Aug 2017
1
Android Oreo android.com
771 points by axg  2 days ago   630 comments top 5
1
dcomp 2 days ago 7 replies      
The most interesting part is the way they are planning on tackling fragmentation from O onwards, with Project Treble [0].

If your device ships with O it should be running an immutable, semantically versioned HAL. In essence, you should be able to flash AOSP on every new device, no matter what the vendor does.

Edit: I can see it now: in the technical specs of each device you will see a list of HAL versions. The newer your HAL, the longer you can expect support from AOSP, if not from your vendor.

[0] http://androidbackstage.blogspot.co.uk/2017/08/episode-75-pr...

2
klondike_ 2 days ago 9 replies      
Project Treble is the most important thing in this release.

>The biggest change to the foundations of Android to date: a modular architecture that makes it easier and faster for hardware makers to deliver Android updates.

With any luck, this will end the huge security/update problem Android has. Right now an update depends on the chip manufacturer's drivers, then the OEM adding them to the ROM with their custom "improvements", and finally the carrier pushing the update to devices. It takes just one break in that chain for a device to go without updates, which is a security disaster. If Google can push updates from the Play Store (presumably the end goal of Treble), none of this will be a problem.

3
rdsubhas 2 days ago 8 replies      
Not saying that everything else is bad, but one thing that strikes me is how much they have run out of interesting things, now that they have to use fillers[1] like:

Tooltips

Support for tooltips (small popup windows with descriptive text) for views and menu items.

Normally, this would be relegated to a git changelog in the support library. But this is on the global marketing landing page.

I like to imagine a fictional internal mail thread going like this:

> Folks! please, give us something, anything, to put on the landing page!

> Someone replies duh, maybe tooltips

> What's a tooltip?

> uhh, small popup windows with descriptive text

> What's a popup window?

> uhh...

> Never mind, it's on!

Obligatory /s, and yeah, it's Google, but seriously, I can't imagine any other circumstances in which this specific copy, which tries to explain what a "tooltip" is by using the words "popup window", "view" and "menu item", could have come up.

This could be a good sign, though, of the maturity of the platform (and it's harder to feel left out if you didn't upgrade).

1: https://www.android.com/versions/oreo-8-0/

4
dcow 2 days ago 6 replies      
Am I the only one who's really disappointed by the platform's shift in its stance on background execution? I was originally drawn to Android because it wasn't iOS. I wanted to develop on a platform where I could run a service in the background if the user wanted that. Apps that were bad stewards of battery life and phone resources were supposed to be scrutinized by users and removed if they behaved too poorly. You can be a good steward; it's just harder, especially when your monolithic app is an uninformed port of some legacy iOS code.

By issuing a hard restriction on background usage Google has brilliantly improved battery life for the masses while condoning the same lazy architectural patterns of the past, locked people into Firebase Cloud Messaging (a Google service that is not part of AOSP), and potentially stunted Android adoption in domains outside of mobile. It's the turning of an era for Android, and my interests have moved elsewhere (from an app-platform perspective; embedded Android is still viable since everything you ship runs as a system app with no restrictions).

5
amrrs 2 days ago 7 replies      
Has Google ever released a report on how much time it takes an average flagship device to get the latest Android version? Even if you've just paid $$$$ for a Samsung Galaxy S8, you're not going to get Android O tomorrow morning; with iOS, you would. That makes a huge difference in a world where software updates play a bigger role in performance and functionality than hardware updates (read: image processing vs. a 13MP-to-16MP camera). Google hasn't been successful there.
2
Let Consumers Sue Companies nytimes.com
564 points by jseliger  1 day ago   265 comments top 23
1
flexie 1 day ago 3 replies      
In the EU you cannot bind consumers by such arbitration clauses: http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A3...

Consumers can usually sue corporations at a court in their own jurisdiction. Many European countries also allow class-action lawsuits. Yet we have few lawsuits against corporations. There are other reasons for this:

- consumers are not awarded punitive damages,

- court fees are higher (usually a percentage of what you ask for),

- if the consumers lose they pay not only their own lawyer, but (to an extent decided by the court) also the lawyer representing the corporation,

- many European countries have consumer "watchdogs" / ombudsmen, i.e. public entities that have the authority to start cases against corporations,

- many European countries have a variety of consumer complaint boards that handle small claims efficiently and at low cost.

Few who know consumer matters in both the US and the EU would trade the European system for the American.

2
rayiner 1 day ago 4 replies      
Class actions can be effective where the class members are relatively large, sophisticated entities. E.g. the data-breach class action brought by banks that is mentioned in the article. But in the consumer-protection space, we should consider alternatives. Where the class members are individual consumers, litigation ends up being lawyer-driven. Cases settle for pennies on the dollar of potential damages, and end up serving neither to compensate consumers nor really to deter illegal conduct.

Notably, in the EU, the tendency is to have more "ask for permission before doing something" regulation, and less "ask for forgiveness after doing something wrong" litigation. E.g. unlike in the U.S., there are laws setting forth detailed safety requirements for consumer products, and agencies responsible for enforcing those requirements. I suspect that approach yields the desired level of product safety at lower cost than the American approach. Similar approaches could, of course, be applied to consumer financial products.

3
hedora 1 day ago 4 replies      
I never understood why binding arbitration was legal for non-negotiated contracts.

Also, by reading this you agree all disputes between us will go through an arbitration firm of my choosing.

4
mxfh 1 day ago 2 replies      
Just look at how the EU consumer protection directives are working over here. You're simply not allowed to waive your guaranteed rights as a customer in some sort of EULA or TOS. And if you are forced to, the whole contract is void in its entirety and you're free to walk away from it.
5
wimgz 1 day ago 6 replies      
> Only a lunatic or a fanatic sues for $30.

A bit off topic here but this is IMHO a great challenge for AI: making a lawyer affordable for the masses when they are bullied by banks, airlines, etc.

If it costs you $5, why not sue for $30?

6
leoharsha2 1 day ago 1 reply      
I sued a company where I worked. They were not paying my final dues after I left. It took me a year and lots of visits to court just to get them. Finally, after a year of all that wasted time, they came to me and asked for a settlement, to which I agreed, and I got part of my money.

In India, even if you know you will win the case, it is not worth it. It will cost you your peace of mind. Although I highly recommend suing companies in other countries, in India we should think twice before suing anyone.

7
s73ver_ 1 day ago 0 replies      
There is no valid reason why a company should require someone to sign away their rights, and it absolutely should not be allowed. Otherwise individuals might as well not have those rights at all.
8
clavalle 1 day ago 1 reply      
Capture of the machinery of the justice system by the wealthy is one of the most impactful and persistent market distortions in human history.

If people cannot bring the power of government to enforce appropriate costs against players with more market power, then government of the People, by the People, and for the People has failed.

9
coolaliasbro 1 day ago 0 replies      
"First, opponents claim that plaintiffs are better served by acting individually than by joining a group lawsuit."

Then why should it matter if plaintiffs want to act collectively (i.e., put themselves at a disadvantage, according to the quote above)? Wouldn't that benefit the opponents?

10
rrggrr 1 day ago 1 reply      
FACT: products liability law reduced accidental deaths in the workplace. (source: https://www.cdc.gov/mmwr/preview/mmwrhtml/mm4822a1.htm)

There is no reason similar laws wouldn't work for consumer privacy. Tort law needs to be applied aggressively to data privacy.

11
jseliger 1 day ago 0 replies      
This is a great idea. Contracts of adhesion that dominate our lives should not automatically and totally be stacked in favor of companies.
12
redm 1 day ago 4 replies      
I don't disagree with the sentiment of the article given the examples provided, i.e. Wells Fargo. That said, given the climate for frivolous lawsuits brought by "shakedown" attorneys, it opens the floodgates for something far worse.

Maybe a better compromise is to allow for binding arbitration UNLESS the company is found guilty of fraud or other illegal activity, such as Wells Fargo.

Alternatively, perhaps tort reform to prevent frivolous lawsuits would remove the need for arbitration.

13
Pica_soO 1 day ago 1 reply      
I wish there was a way to preemptively sue an industry before it even undertakes an operation. Basically, a group of people bets with lawsuits on the damages an industrial operation will do to society, forcing anyone who undertakes such an operation to build up a huge deposit for settlements and legal fees to cover the threat - for example, for cutting down rainforest. As state actors have proven lousy wardens of these goods of society, maybe the interest of private stakeholders could put such a bounty on the head of damaging activities that they cease, or are replaced with less "dangerous" endeavors that were previously not viable in a market economy that rewards short-term damages distributed across everyone.
14
ct520 1 day ago 0 replies      
There's this one cool site to query outcomes of arbitration. No bueno.

http://levelplayingfield.io/

15
DannyBee 1 day ago 1 reply      
I'd love to understand what the end hope is. Very few class-action lawsuits have resulted in any sort of permanent change. (Also note that the whole reason for class actions was efficient justice, and they are a fairly recent creation, so that's not entirely surprising.) The lawsuits that tend to change things tend to be "government vs." suits.

Maybe it gives consumers a good feeling to be able to sue everyone, but is it actually helping anything?

Even in the past, people were not able to sue the telecoms or banks into having good customer service, or into not doing illegal things. Rarely, if ever, have they recouped the profits these companies made doing whatever. Instead, all the companies just treat it as a "cost of business". I'm not sure it's really been a vehicle for effective change anymore.

Certainly arbitration won't be either, but maybe groups of super annoyed people may have better luck forcing the government into action than people placated by class actions, where the government can wash its hands and say "well, they already took care of it!"

16
socrates1998 1 day ago 0 replies      
I agree, these arbitration clauses are just another way corporations get away with screwing over consumers.
17
richardknop 1 day ago 1 reply      
This isn't already possible? It's not right that companies can hack the law like this. Consumers should be able to sue.
19
Khol 1 day ago 0 replies      
The oddest takeaway from this for me is that these arbitration clauses are banned in contracts with members of the military.
20
Animats 1 day ago 1 reply      
> In 2010, the Consumer Financial Protection Bureau, which I direct, was authorized to study mandatory arbitration and write rules consistent with the study. After five years of work...

Talk about a schedule overrun. That project should have been finished in 2011.

21
exabrial 1 day ago 0 replies      
Call me cynical, but it seems most class action lawsuits end up paying $100m to a lawyer group and $5 to each individual :/
22
amelius 1 day ago 1 reply      
I was under the impression that class-action lawsuits were already a possibility for consumers.
23
necessity 1 day ago 1 reply      
Don't do business with companies that have such contracts? If there is no choice in a given sector, then the issue is a monopoly, not the contracts.
3
Inside a fast CSS engine hacks.mozilla.org
635 points by rbanffy  1 day ago   139 comments top 14
1
crescentfresh 1 day ago 10 replies      
I always wonder, who puts together nifty little blog posts on this kind of thing complete with graphics just for the article? By that I mean, literally what title do they have?

My colleagues and I would/could write up a technical breakdown of something neat or innovative we might have done to solve some problem at work, but we sure as shit can't make cool little graphics interspersed between opportune paragraphs, nor could we figure out how to make the thing entertaining to read.

Is this kind of thing done in coordination with like a PR/graphics department?

2
fpgaminer 1 day ago 0 replies      
Isn't it just crazy that we're gonna get all this cool tech in a browser that is completely free and open source?

And along the way, Mozilla created what is perhaps the most disruptive programming language of the past decade. For free. And open source.

It's really hard to appreciate the gravity of this.

3
robin_reala 1 day ago 1 reply      
I turned this on a couple of weeks ago on Nightly and have noticed precisely zero problems, and a really nice little speedup on CSS-heavy sites. Really good to see large chunks of parallelised Rust code start making their way over from Servo to Firefox.
4
sanxiyn 1 day ago 3 replies      
You may want to actually read this code. You can start by searching for "LayoutStyleRecalc" at https://github.com/servo/servo/blob/master/components/layout.... The following is a verbatim copy.

  // Perform CSS selector matching and flow construction.
  if traversal_driver.is_parallel() {
      let pool = self.parallel_traversal.as_ref().unwrap();
      // Parallel mode
      parallel::traverse_dom::<ServoLayoutElement, RecalcStyleAndConstructFlows>(
          &traversal, element, token, pool);
  } else {
      // Sequential mode
      sequential::traverse_dom::<ServoLayoutElement, RecalcStyleAndConstructFlows>(
          &traversal, element, token);
  }

5
lucideer 1 day ago 6 replies      
It's great to see any company going into detail about their technical implementation, so I'm extremely hesitant to be critical, but I'm really curious who the target audience for this one particular article is.

It's a very, very odd mix of language that sounds like it's directed at a very young child, and standard technical speak. Not the usual for the Hacks blog.

Not to fault the article too much, but I just found the tone a bit confusing. Even veering towards condescension in some parts, though I'm certain that's entirely accidental and wasn't the author's intent at all.

6
pducks32 1 day ago 0 replies      
This is really great of Mozilla. I'm really excited to see such a large Rust project used at such a scale; after this I think there will be few doubts that it's a really, really impressive language. Also, the fact that Mozilla knew this and decided to take such a bold step as rewriting their engine is super cool. I've done rewrites and they never go well, so hats off to them.
7
aembleton 1 day ago 1 reply      
The writeup is inspiring. I found it very clear and yet reasonably in depth. It helps me to understand how much work modern browsers are doing.

Also, excellent use of Rust.

8
jancsika 1 day ago 7 replies      
> 4. Paint the different boxes.

Is this really what happens under the hood?

1. If I overlap 52 html <div>s like a deck of cards, does the browser really paint all 52 div rectangles before compositing them?

2. If I overlap 52 <g>s like a deck of cards, does the browser really paint all 52 <g>s before compositing them?

3. In Qt, if I overlap 52 QML rectangles like a deck of cards, does the renderer only paint the parts of the rectangles that will be visible in the viewport? I was under the impression that this was the case, but I may be misunderstanding how the Qt QML scenegraph (or whatever it is called) works in practice.

edit: typo

9
tannhaeuser 1 day ago 0 replies      
Congrats! Beyond the CSS engine itself, I also very much appreciate inside development stories like these. I'd also like to read a meta-story about the development efforts in terms of time spent, prior knowledge required etc., and CSS spec feedback, with a reflection on the complexity of implementing CSS from scratch.
10
t20n 1 day ago 0 replies      
Haven't even read it, just looked at the drawings, and now I know how a browser parses CSS.
11
ndh2 10 hours ago 0 replies      
Very nice writeup! One thing I found strange is that multi-threading is ELI5'd, but the reader is expected to know what the DOM is.
12
om2 1 day ago 2 replies      
I wish this post included some benchmarks or measurements.
13
c-smile 1 day ago 5 replies      
Parallel processing demonstrates benefits only if you have physical cores to run the code on. If just one core is available to the app, then parallel processing is a loss due to thread-preemption overhead.

Are there any real-life examples of achieved speedups?

14
kristofferR 1 day ago 1 reply      
It's such a shame Firefox (including the nightlies) kills my Mac (making most other applications hang/break), since the new versions are otherwise way better than Chrome.

Does anyone know what it is about Firefox that makes the rest of my system unable to spawn new processes?

4
Chrome Enterprise blog.google
409 points by pgrote  1 day ago   258 comments top 30
1
redm 1 day ago 20 replies      
I'm hesitant to invest any more in the Google ecosystem after reading about how account termination can happen without explanation or recourse. [1] The last thing I need is more lock-in to a Google world.

[1] https://news.ycombinator.com/item?id=15065742

2
pducks32 1 day ago 7 replies      
I thought this was a special version of Chrome the browser, and I think many people will too. Especially someone like my brother, who works at a corporation: if they told him they're switching to Chrome Enterprise he'd be a tad confused.

Side note: the reading experience on this blog is one of the best I've seen on mobile. Love the text size, though the header animation was not the smoothest. Nonetheless, great job.

3
twotwotwo 1 day ago 2 replies      
One of my annoyances on consumer Chrome OS is that the built-in VPN support is tricky. There's a JSON format, ONC (https://chromium.googlesource.com/chromium/src/+/master/comp...), that maps to OpenVPN options. When I last used it, the documentation was a bit tricky (though it may have improved), I couldn't find ONC equivalents for some of my .ovpn options, and, most frustratingly, there was very little specific feedback if you tried to import a configuration that wasn't right. Because of all that, I wonder if it was developed so Google could support specific large customers' VPNs (think school districts or companies) and its public availability was mostly an afterthought.
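
For context, an ONC file is roughly this shape (a rough sketch; check the exact field names against the ONC reference linked above):

  {
    "Type": "UnencryptedConfiguration",
    "NetworkConfigurations": [{
      "GUID": "{work-vpn}",
      "Name": "Work VPN",
      "Type": "VPN",
      "VPN": {
        "Type": "OpenVPN",
        "Host": "vpn.example.com",
        "OpenVPN": { "Port": 1194, "ClientCertType": "Ref" }
      }
    }]
  }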

If you leave the GUI, you can also run openvpn yourself on a good old .ovpn file, but you lose some of the nice security properties you get with the default Chrome OS setup, you have to do cros-specific hacks to make it work (https://github.com/dnschneid/crouton/issues/2215#issuecommen... plus switching back and forth between VPN and non-VPN DNS by hand), and last I checked it made ARC (Play Store) apps' networking stop working.

I would consider paying a premium just to get my Chromebook connecting to work's VPN smoothly, though of course I'd love it if improved VPN functionality were available to everyone by default.

At some point I'm probably also going to take a second look at the latest ONC docs. It looks like they've improved since I first looked at VPN setup a while back.

4
jstewartmobile 1 day ago 5 replies      
Sounds great until they shut your shit down without explanation, and all you're left with is a support number that is about as helpful as a brick wall...
5
pat2man 1 day ago 0 replies      
This is probably the perfect OS for any shared terminal: libraries, internet cafes, etc. You don't need native apps, just a locked down browser that can keep your settings and bookmarks across devices.
6
tbyehl 1 day ago 2 replies      
Is this just a re-branding of "Chrome device management"?

I wish they'd come up with something family-oriented. I've got my mom, girlfriend, and girlfriend's children all using low-end Chromebooks / Chromebases as their primary computers, and I'm using one for about 80% of my computing. Chrome device management would be useful for us, but $50/year per device plus needing to buy G Suite per user is a bit much.

7
niftich 1 day ago 0 replies      
Notwithstanding the Active Directory integration, this is the clearest shot across the bow of Microsoft's on-prem management suite yet.

The naming is puzzling. But I'm sure MS shops are used to weird names, and aren't likely to get pedantic about whether or not there should be an "OS" in there. They likely went with the simpler name to build on mindshare among decision-makers, and to intentionally muddy the waters to their benefit.

8
Havoc 1 day ago 2 replies      
A big chunk of business is dead in the water without Excel (and to a lesser extent Word/Powerpoint).

And no don't tell me google sheets. Great for sharing data...ultra crap for data manipulation.

9
solatic 1 day ago 4 replies      
Lots of enterprises out there with many users who need nothing more than a web browser, email, light word processing and maybe slideshow software. Active Directory integration makes the migration possible. Chrome OS provides it all in a way which dramatically reduces maintenance costs compared to Windows.

If Google starts showing some reduced TCO figures, they'll start to pull a lot of converts.

10
Multicomp 1 day ago 2 replies      
$50 per device per year? For what, extra management frameworks on a Chromebox? What a bargain /s
11
devrandomguy 1 day ago 1 reply      
On a related note, does anyone know how to bury a dead corporate user account? The company that gave it to me doesn't even exist anymore, but Google keeps insisting that "account action is required". The company terminated my login shortly before imploding, and I lost the associated phone number when I fled the country, so there is no way that I can get back in to shut it down myself.

I suppose I will eventually just buy a new phone in a few years, but I'm not thrilled about all that private work / business data that is sitting in limbo.

12
MBlume 1 day ago 1 reply      
I would really like to have a computer for use at work where my IT department could feel like they had assurance that it was secure/virus free/malware free but from which I could sign into my personal accounts without feeling like I'm opening them to my IT department. Right now I just carry two laptops in my bag and it's really annoying. Wondering if Chrome Enterprise will enable this sort of thing.
13
trequartista 1 day ago 3 replies      
While there is Google Play integration, there is no word on how they plan to integrate with the corporate intranet - which is littered with thousands of custom applications, ranging from payroll to HR to ticket and incident management.
14
bbarn 1 day ago 1 reply      
I suspect Active Directory integration might make this actually have legs. Especially in the educational industry.
15
morpheus88 23 hours ago 0 replies      
Created a throwaway for this. Google has a reputation for shutting you down without any recourse, and I can attest to this personally. Hope I'm not off topic, but I had a successful Android app which was taken down from the Play Store because I used a single keyword that was copyrighted; it was really essential for this app, and I had provided context for using the keyword. It was a free app anyway and I was making no money from it (no ads either). Anyway, they removed my app from the store and I had no way to get it back up - all my ratings, downloads, and reviews were lost. The point here is that they didn't give me a chance to defend myself - one strike, and you're out and never coming back again. Imagine enterprises using Google products with this sort of attitude.
16
bedhead 1 day ago 0 replies      
Sounds great until you realize that their "Hate Algorithm" or whatever will end up erroneously shutting down your computer one day.
17
massar 1 day ago 1 reply      
I hope they finally acknowledge the Security Bypass they have in this "Enterprise" version... where it will be even more serious

https://bugs.chromium.org/p/chromium/issues/detail?id=718831
https://bugs.chromium.org/p/chromium/issues/detail?id=696378
etc...

It is fun to report those things to Google Project Zero and then find that people on that side obviously do not understand that security bypasses are... well... security issues.

full submission reproduced below, just in case they radar-disappear the item... duping items is apparently what Project Zero does so that the items disappear from Google results...

---

PREAMBLE

Thank you for an amazingly solid looking ChromeOS. Happy that I picked up a nice little Acer CB3-111, thought about plonking GalliumOS/QubesOS or heck OpenBSD on it, but with the TPM model and the disk wiping, not going to.

Just wanted to note this discovery so that you are aware of it and hopefully can address the problem as it would improve the status quo. Keep up the good work!

Greets, Jeroen Massar <jeroen@massar.ch>

VULNERABILITY DETAILS

By disabling Wireless on the login screen, or just not being connected, only a username and password are required to login to ChromeOS instead of the otherwise normally required 2FA token.

This design might be because some of the "Second Factors" (SMS/Voice) rely on network connectivity to work and/or token details not being cached locally?

But for FIDO U2F (eg Yubikeys aka "Security Key"[1]) and TOTP no connectivity is technically needed (outside of a reasonable time-sync). The ChromeOS host must have cached the authentication tokens/details though to know that they exist.

The article at [2] even mentions "No connection, no problem... It even works when your device has no phone or data connectivity."

[1] https://support.google.com/accounts/answer/6103523?hl=en
[2] https://www.google.com/intl/en/landing/2step/features.html

VERSION

Chrome Version: 59.0.3071.35 dev
Operating System: ChromeOS 9460.23.0 (Official Build) dev-channel gnawty
Blink 537.36
V8 5.9.211.16

REPRODUCTION CASE

First, the normal edition:

- Take a ChromeOS-based Chromebook (tested with the version mentioned above)
- Have a "Security Key" (eg Yubikey NEO etc) enabled on the Google Account as one of the 2FA methods
- Have Wireless enabled
- Login with username, then enter password, then answer the FIDO U2F ("Security Key") token challenge

All good as it should be.

Now the bad edition:

- Logout & shutdown the machine
- Turn it on
- Disconnect the wireless from the menu (or just make connectivity otherwise unavailable)
- Login with username, then password
- Do NOT get a question about Second Factors, just see a ~5 second "Please wait..." that disappears
- Voila, logged in.

That is BAD, as you just logged in without 2FA while that is configured on the account.

Now the extra fun part:

- Turn on wireless
- Login to Gmail/GooglePlus etc, and all your credentials are there, as that machine is trusted and cookies etc are cached.

And just in case (we are now 'online' / wireless is active):

- Logout (no shutdown/reboot)
- Login with username, password... and indeed it asks for 2FA now.

Thus showing that toggling wireless affects the requirement for 2FA.... and that is bad.

EXPECTED SITUATION

- Being asked for a Second Factor even though one is not "online".

Otherwise you could be walking through, say, an airport with no connectivity, and even with the token at home, just the username and password would be sufficient to log in.

SIDE NOTE

For the Google Account (jeroen@massar.ch) I have configured:

- "strong" password

and as Second Factors:

- FIDO U2F: two separate Yubikeys configured
- TOTP ("Google Authenticator") configured
- SMS/Voice verification to cellphone
- Backup codes on a piece of paper in a secure place

Normally, when connected to The Internet(tm), one will need username(email), password and one of the Second Factors. But disconnect and none of the Second Factors are needed anymore.

SIDE NOTE2

The Google Account password changer considers "GoogleChrome" a "strong" password... you might want to check against a dictionary so that such simple things cannot be used, especially as 2FA can be bypassed this easily.

18
jdauriemma 9 hours ago 0 replies      
Random observation: the font-background contrast ratio in this post makes it very hard to read comfortably.
19
gangstead 1 day ago 0 replies      
I don't believe the checkmark indicating "Cloud & Native Print" support on Chrome OS. I've got two Chromebooks and have used Chromeboxes at work and have never gotten printing to work reliably.
20
chaudhary27 1 day ago 1 reply      
I don't like being locked into the Google ecosystem at work, but I also hate some of the Microsoft services at work.
21
open-source-ux 1 day ago 3 replies      
Not a popular opinion here I know, but I'll say it anyway. Not a single word in that blog post about privacy.

Chrome OS is already widely used in US schools (and tracks student online activities), now we have a 'business-friendly' version of Chrome OS.

What kind of analytics does a cloud OS like this record? What does Google do with that data? Even if that data is 'anonymised' (a pretty meaningless term nowadays), in aggregated form that gives Google staggering quantities of data that they can mine for the future. Why did Google not even mention the word privacy once in that blog?

22
jaypaulynice 1 day ago 0 replies      
$50/device?? With that said, I suspect Facebook is working on a browser...that could compete well with Chrome...any reason why Facebook hasn't developed a browser?
23
booleandilemma 1 day ago 1 reply      
I assumed this was an enterprise version of Chrome, with the main difference being it doesn't auto update, thus being more friendly to the IT departments who administer a company's computers.
24
killjoywashere 1 day ago 0 replies      
David was working on the smart card authentication system for ChromeOS not too long ago. Glad to see this maturing.
25
demarq 1 day ago 1 reply      
That is a very compelling price point.
26
ben174 1 day ago 2 replies      
I've been seeing IT become increasingly frustrated at their inability to lock down the security on macOS to the level they'd hoped. I wouldn't be surprised to see Silicon Valley startups issue Chromebooks as the default in 3-4 years' time. Especially if Google gets this right.
27
mnd999 18 hours ago 0 replies      
Is it April 1 already?
28
hiram112 1 day ago 1 reply      
I've always had the belief that the Microsoft juggernaut would continue its slow decline in relevance as mobile and web devices removed the need for Windows, and the improvement of apps like Google Docs, OpenOffice, etc. would eat away at Office from the other side.

But I really think now we're approaching the point where their fall might happen swiftly. Chromebooks are fine for the majority of corporate users. And if they catch on, there is no need for any of the Active Directory / Azure tie-ins that MS has been hoping would pull enterprise customers towards Azure, Office 365, and all the rest.

And even if Microsoft can convince customers to stay, they simply won't be able to charge the same prices they've enjoyed for decades now with the overpriced Office, Server, and Client access licenses.

And once an enterprise moves away from Active Directory and Office, I don't see any benefit of using the very expensive Sharepoint, Outlook, OneDrive, and other apps that have always been overpriced, but worth it as they integrated well together and saved companies more money via lower IT costs.

29
darkr 1 day ago 0 replies      
> According to Ed Higgs, Interim Director of Global Service Delivery for Group IT at Rentokil: "With over 500 Chromebooks in use in our organization, Chrome now forms part of our standard offering within Rentokil Initial."

500? Do you even lift bro?

30
frik 1 day ago 1 reply      
Please change the title to "Chrome OS Enterprise" - it's not Chrome browser enterprise.
5
iOS 11 Safari will automatically strip AMP links from shared URLs twitter.com
367 points by OberstKrueger  7 hours ago   284 comments top 23
1
cramforce 4 hours ago 14 replies      
TL of AMP here. Just wanted to clarify that we specifically requested that Apple (and other browser vendors) do this. AMP's policy states that platforms should share the canonical URL of an article whenever technically possible. This browser change makes it technically possible in Safari. We cannot wait for other vendors to implement it.

It appears Safari implemented a special case. We'd prefer a more generic solution where browsers would share the canonical link by default, but this works for us.

2
millstone 5 hours ago 3 replies      
I hope the next step is a way to strip AMP links from all URLs, backfilling the "Disable AMP" setting that Google ought to have provided.

AMP has always worked poorly on iOS: it has different scrolling, it breaks reader mode, and it breaks status bar autohide and jump-to-top. Perhaps Apple would be less hostile to AMP if the implementation were better.

3
OtterCoder 5 hours ago 3 replies      
Thank heavens. Google's efforts to 'improve' the web have been disastrous. Like turning every list of facts into a pointless ramble because Google needs 1000 words of 'rich content'. And AMP being a push to hobble pages by making them into a proprietary cache format instead of encouraging simpler HTML.
4
recursive 5 hours ago 8 replies      
I don't know much about the implementation details of AMP. But my perspective as an end user is that it's pretty great. Non-amp pages tend to take multiple seconds to get interactive, and then the content jumps around as images and ads and fonts load. AMP tends to be usable in under half a second.
5
chipotle_coyote 4 hours ago 4 replies      
Just to make sure I'm following this myself: this isn't about disabling AMP, it's about making sure that URLs that you send to other applications or the clipboard from Safari will be the true URLs of the original web page, not AMP URLs. Right? That's the only way I can read "strip AMP links from shared URLs," but a lot of comments here are piling on to AMP itself. Which I understand (I don't like it for a lot of the reasons already brought up here, both in terms of philosophy and usability), but I don't think that's what we're actually talking about.
6
plasma 4 hours ago 1 reply      
Here's my AMP experience on Reddit:

1) Click the AMP link in Google

2) See half the reddit thread comments; since AMP is a cached older version, I then need to click "View full comments", which gets me to the mobile reddit link I wanted in the first place

3) For some reason, links in AMP reddit don't let you open in a new window, which I often want to do when reading comments inline and seeing someone post a link in a comment. Frustrating.

AMP cost me more time and effort than having just gone to the non-AMP link directly.

7
matt4077 4 hours ago 1 reply      
People in this thread seem to be overreacting like my immune system on strawberries....

Apple isn't taking up the good fight against AMP. They're simply removing it from URLs when those are shared via the built-in "share" functionality.

99% of interactions with AMP pages will see no impact. It seems like it should be uncontroversial that it is preferable to use the canonical non-AMP URL when sharing, so as to invoke content negotiation between that page and the device of whoever clicks on the shared link.

8
yegle 5 hours ago 1 reply      
It might be simply because Safari respects the canonical link element: https://en.wikipedia.org/wiki/Canonical_link_element

 <link rel="canonical" href="http://www.cnn.com/2017/08/23/politics/trumps-ire-at-aides-advice/index.html">
EDIT: correct HTML element <link rel="canonical">
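
Replicating this elsewhere would be a one-liner (a minimal sketch, assuming the page exposes the element as above):

  // Prefer the canonical URL when sharing, falling back to the page URL.
  const link = document.querySelector('link[rel="canonical"]');
  const shareUrl = (link && link.href) || location.href;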

9
SubiculumCode 5 hours ago 3 replies      
I detest AMP and everything it stands for. I'd like to see this in Firefox.
10
Angostura 4 hours ago 1 reply      
I wish it had an option to simply strip AMP in all circumstances. I quite often copy & paste URLs into e-mails, for example. It's irksome to have to request the desktop version just to get the proper domain and URL.
11
mrmondo 5 hours ago 1 reply      
I absolutely support this move. I've been trying to rid myself of AMP since it was released, and this at least stops me from having to clean up URLs when sharing them.
12
whalesalad 4 hours ago 0 replies      
I guess I am the only person who appreciates AMP. I really love the abbreviated version of things, without all the extra crap that has to load over the wire to render what amounts to a few paragraphs I'd like to read.
13
tannhaeuser 5 hours ago 4 replies      
Hopefully Apple fighting AMP makes it a non-starter going forward now since it isn't reaching lucrative iOS users.
14
jonluca 3 hours ago 0 replies      
Hopefully it'll fix some of the issues with getting redirected on desktops when people share links here/on reddit
15
nodamage 4 hours ago 0 replies      
My understanding is that AMP pages load a cached version of the content inside a nested iframe. Doesn't this break browser extensions/bookmarklets that rely on processing the current HTML on the loaded page? Stuff like Instapaper or Pocket, for example?
16
lokedhs 3 hours ago 0 replies      
I only wish there was a way to automatically have those pesky t.co links resolved when sharing.

I don't use Twitter, but other people do, and when they share a link to me I'd like to be able to 1) see what I am clicking on, and 2) avoid sending analytics information to Twitter when I do so.

17
Nuance 3 hours ago 0 replies      
18
guelo 3 hours ago 0 replies      
Would be great if it got rid of twitter shortened urls and other similar walled-garden cruft.
19
emilfihlman 4 hours ago 0 replies      
AMP is horrible and should just be buried.
20
mratzloff 5 hours ago 0 replies      
Accelerated Mobile Pages, for those who are curious.

https://www.ampproject.org

21
Fej 3 hours ago 0 replies      
Anyone else think that the EU is going to file an antitrust suit against Google over AMP? Seems possible given their lack of hesitance in the past.
22
jtl999 4 hours ago 0 replies      
Thank goodness

On a related note, anyone know an extension for desktop Chrome that converts AMP pages to non-AMP pages? Or is that not programmatically possible?

23
sergiotapia 4 hours ago 2 replies      
I love Apple the most when they throw their weight around for the benefit of the end user. It's pretty clear that Google's main concern is ads ads ads.

Apple already has my money, they focus 110% on making my life easier and their products fantastic. Go Apple!

6
How JavaScript works: inside the V8 engine sessionstack.com
471 points by zlatkov  2 days ago   110 comments top 10
1
Veedrac 2 days ago 12 replies      
> How to write optimized JavaScript

This is all sensible advice if you're interested in writing fast-enough code, but I do find there's a lack of material for people who want to write fast Javascript. Pretty much the only thing I've found is the post by Oz[1], though I really don't want to have to compile Chrome.

For an example, I have a method in Javascript that does a mix of (integer) arithmetic and typed array accesses; no object creation or other baggage. I want it to go faster, and with effort I managed to speed it up a factor of 5. One of the things that helped was inlining the {data, width, height} fields of an ImageData object; just moving them to locals dropped time by ~40%.
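
For illustration, the kind of change that helped (a minimal sketch; the function and loop body are hypothetical stand-ins, not the code in question):

  // Reading imageData.data/width/height once, outside the hot loop,
  // avoids repeated property loads on every iteration.
  function sumRedChannel(imageData) {
    const data = imageData.data;    // Uint8ClampedArray, RGBA bytes
    const width = imageData.width;  // plain integers from here on
    const height = imageData.height;
    let sum = 0;
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        sum += data[(y * width + x) * 4]; // red channel; integer math only
      }
    }
    return sum;
  }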

Yet after all this effort, mostly based on educated guesses since Chrome's tooling doesn't expose the underlying JIT code for analysis, the code is still suboptimal. There's a pair of `if`s such that if I swap their order, that part of the code allocates. How do people deal with these issues? A large fraction of this code is still allocating, and I haven't a clue where or why.

Perhaps I'm asking too much from a language without value types (srsly tho, every fast language has value types), but what I want is clearly possible: ASM.js does it! I don't really want to handwrite that, though.

[1]: https://www.html5rocks.com/en/tutorials/performance/mystery/ (E: This link is actually written from a different perspective than the one I read, but the content is the same.)

2
fdw 2 days ago 0 replies      
If you're into V8 internals, I'd recommend watching these talks by Franziska Hinkelmann, a V8 engineer at Google: https://www.youtube.com/watch?v=1kAkGWJZ6Zo, https://www.youtube.com/watch?v=B9igDWV5ZUg and https://www.youtube.com/watch?v=p-iiEDtpy6I&t=606s

She's also recently started blogging at https://medium.com/@fhinkel

3
cm2187 2 days ago 11 replies      
I was wondering: given that 90% of the JavaScript in browsers must be standard libraries (jQuery, Bootstrap & co), wouldn't it make sense for Google to hash the source code of every published version of these libraries, compile them statically using full optimisation, and ship the binaries as part of their updates to the browser, so that you only have to compile the idiosyncratic part of the code?
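
A rough sketch of the lookup being proposed (entirely hypothetical; compileFromScratch and the precompiled map stand in for whatever the engine would actually do):

  // Key scripts by content hash; reuse a precompiled artifact on a hit.
  async function compileOrReuse(source, precompiled) {
    const bytes = new TextEncoder().encode(source);
    const digest = await crypto.subtle.digest('SHA-256', bytes);
    const hash = Array.from(new Uint8Array(digest))
      .map(b => b.toString(16).padStart(2, '0')).join('');
    // precompiled: Map from content hash to a ready-to-run artifact (hypothetical)
    return precompiled.get(hash) || compileFromScratch(source);
  }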
4
yanowitz 2 days ago 1 reply      
Interesting article - I'd love to see one just on GC.

I just downloaded the latest node.js sources and V8 still has a call to CollectAllAvailableGarbage in a loop of 2-7 passes. It does this if a much cheaper mark-and-sweep fails. Under production loads, that would occasionally happen. This led to stop-the-world GC pauses of 3600+ms with V8, which was terrible for our p99 latency.

The fix still feels weird -- we just commented out the fallback strategy and saw much tighter response time variance with no increased memory footprint (RSS).

I never submitted a patch though because although it was successful for our workload, I wasn't sure it was generally appropriate (exposed as a runtime flag) and I left the job before I could do a better job of running it all down.

5
btown 2 days ago 0 replies      
Is there a way to see what hidden class an object has? For instance, if an array of objects is parsed from json, were all the objects assigned the same hidden class? Alternately, can one obtain statistics about hidden class usage? Seems like this would be very helpful for real world apps, especially given the prevalence of data intensive Electron apps.

EDIT: https://www.npmjs.com/package/v8-natives haveSameMap seems to do exactly this!
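
A quick check looks something like this (a sketch; assumes the package exposes haveSameMap as the edit above suggests, and that node is launched with --allow-natives-syntax, which the package needs):

  const v8 = require('v8-natives');
  const p1 = JSON.parse('{"a": 1, "b": 2}');
  const p2 = JSON.parse('{"a": 3, "b": 4}');
  // true if both objects ended up with the same hidden class (map)
  console.log(v8.haveSameMap(p1, p2));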

6
dlbucci 2 days ago 3 replies      
> Also, try to avoid pre-allocating large arrays. Its better to grow as you go.

Is this really true? I've only heard the opposite (preallocate arrays whenever possible) and I know that preallocation was a significant performance improvement on older devices with older javascript engines.

7
kevmo314 2 days ago 1 reply      
> Now, you would assume that for both p1 and p2 the same hidden classes and transitions would be used. Well, not really. For p1, first the property a will be added and then the property b. For p2, however, first b is being assigned, followed by a. Thus, p1 and p2 end up with different hidden classes as a result of the different transition paths. In such cases, its much better to initialize dynamic properties in the same order so that the hidden classes can be reused.

Does that mean an object with n properties takes up O(n^2) memory for the class definitions or O(n!) if the classes do not guarantee a property initialization order?
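
Concretely, the case being described (a sketch; the hidden classes themselves aren't observable from plain JS):

  const p1 = {};
  p1.a = 1; // transition: {} -> {a}
  p1.b = 2; // transition: {a} -> {a, b}

  const p2 = {};
  p2.b = 2; // transition: {} -> {b}
  p2.a = 1; // transition: {b} -> {b, a}, a different hidden class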

8
sjrd 2 days ago 0 replies      
For a more comprehensive reference on v8 internals, there is http://wingolog.org/tags/v8, which has been around for a long time. It is even directly referenced from https://developers.google.com/v8/under_the_hood.

I don't think there's anything in this post that wasn't already explained in this reference, except the fact that now there's Ignition and TurboFan, but that doesn't fundamentally change anything.

9
schindlabua 1 day ago 0 replies      
Does this also mean that in the "function that returns an object vs. prototype" debate the former wins, because object literals presumably require fewer class transitions?

  let Ctor = function(a, b) {
    this.a = a;
    this.b = b;
  }

vs

  let obj = (a, b) => ({ a: a, b: b });

10
cryptozeus 2 days ago 0 replies      
Thanks for the post, great writing...
7
Examining a vintage RAM chip, I find a counterfeit with a different die inside righto.com
455 points by darwhy  2 days ago   136 comments top 20
1
NikolaNovak 2 days ago 3 replies      
Hah... Through the first few sentences I kept wondering which wondrous architecture are we talking about, that "64-bit" memory chip is considered "vintage"...?

It took me embarrassingly long to realize that it's not a 64-bit bus, it's a 64-bit chip... holding an amazing 16 x 4 bits = 64 bits of data total.

Just goes to show it's hard to be sure where your unspoken assumptions may lie.... :-)

2
pavel_lishin 2 days ago 6 replies      
> As for Robert Baruch's purchase of the chip, he contacted the eBay seller who gave him a refund. The seller explained that the chip must have been damaged in shipping!

I think at that point, you report them to Ebay for fraud, don't you? Or is that just spitting in the ocean?

3
todd8 2 days ago 1 reply      
This, along with the complaints in the comments here, is quite discouraging. I'm ready to give up on third-party sellers on Amazon, see https://news.ycombinator.com/item?id=14993216 [I Fell Victim to a $1,500 Used Camera Lens Scam on Amazon], and now Ebay looks like it's not going to be a viable alternative.
4
happycube 2 days ago 5 replies      
"The eBay seller gave him a refund. The seller explained that the chip must have been damaged in shipping! (Clearly you should pack your chips carefully so they don't turn into something else entirely.)" ;)
5
jk2323 2 days ago 4 replies      
"Why would someone go to the effort of creating counterfeit memory chips that couldn't possibly work? The 74LS189 is a fairly obscure part, so I wouldn't have expected counterfeiting it to be worth the effort. The chips sell for about a dollar on eBay, so there's not a huge profit opportunity. "

This sounds obscure. Small/tiny markup. Small market. High fake-detection rate. I wonder if there is something about the story that we're missing.

6
thmsths 2 days ago 5 replies      
I would be interested to know how the Pentagon deals with those 15% of counterfeit ICs, the implications are quite scary.
7
robryk 2 days ago 4 replies      
If one counterfeits a chip using something that will not work at all, why put any chip inside at all? Why not just place a resistor between VCC and GND?
8
windlessstorm 2 days ago 2 replies      
Thanks for this, it was an awesome read. Any more such blogs for learning and getting into electronics and other low-level stuff?

PS. I am a newbie software engineer (C/networking), recently fascinated by and drawn towards electronics.

9
jeffwass 1 day ago 0 replies      
One of the first pics in that article comes from an earlier chip he previously reviewed - the Intel 3101. I'm proud to say my dad provided Ken with those two Intel 3101 chips.

Ken's review of the 3101 is here: http://www.righto.com/2017/07/inside-intels-first-product-31...

This is the first IC ever produced by Intel.

My dad had a few of these chips from an old computer. Some of the 3101 chips are from such early runs they don't even have the usual date stamps on the packages, and were outsourced by Intel into generic wirebonded IC packages.

10
brooklyntribe 2 days ago 0 replies      
From his posts, he's like the smartest person in the world. At least that's my impression.

Mine bitcoin with paper and pencil? Is anyone else in the world even thinking about something so far out?

11
CamperBob2 2 days ago 1 reply      
> The motivation (for the use of an LFSR instead of a traditional counter) is that a shift register takes up less space than a counter on the chip; if you don't need the counter to count in the normal order, this is a good tradeoff

That's kind of a profound observation, even though it's obvious once you think about it. It never occurred to me that a maximal-length shift register is actually a simpler, more efficient logic structure than either a carry-chain adder or a ring counter.
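
For the curious, a maximal-length LFSR counter really is tiny. A sketch of a 4-bit one (taps chosen for x^4 + x^3 + 1, which cycles through all 15 non-zero states):

  // 4-bit Fibonacci LFSR: feedback bit is the XOR of bits 3 and 2.
  function* lfsr4(seed = 0b0001) {
    let state = seed;
    do {
      yield state;
      const bit = ((state >> 3) ^ (state >> 2)) & 1;
      state = ((state << 1) | bit) & 0b1111;
    } while (state !== seed);
  }
  console.log([...lfsr4()].length); // 15 states, in scrambled order

In hardware this is just four flip-flops and one XOR gate, versus the carry logic a binary counter needs.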

12
jxramos 2 days ago 1 reply      
Very cool article. I found myself strangely hit with a wave of nostalgia when the piece came upon "DTMF: dialing a Touch-Tone phone"
13
kazinator 2 days ago 0 replies      
> Why would someone go to the effort of creating counterfeit memory chips that couldn't possibly work?

Because maybe it's a mistake?

Some of the people working at the factory don't know a potato chip from a silicon chip?

True counterfeit chips use the correct die - it is stolen - but the knock-offs cut corners: you're getting something that is not quality controlled, or perhaps even a reject off the factory floor (that might just work in your use case, so you won't notice).

Sometimes counterfeit chips use a different implementation, but of the right general spec. Well sort of:

https://news.ycombinator.com/item?id=14685671 ("Ti NE555 real vs fake: weekend die-shot ").

14
kumarvvr 2 days ago 5 replies      
So, the chip is fake, but how come such chips could work satisfactorily in their place in a PC?
15
kutkloon7 2 days ago 0 replies      
Ken Shirriff is amazing. His blog entries are really worth reading.
16
agjacobson 2 days ago 0 replies      
Why do you think the 74LS189 was being counterfeited? It was the touch-tone chip being counterfeited, and disguised as a 74LS189. The buyer knew the ruse.
17
yuhong 2 days ago 0 replies      
This reminds me of the 1988 DRAM shortage.
18
jackblack8989 2 days ago 4 replies      
Any experts here care to explain how one checks RAM quality? Does CPU-Z do it? (Writing from work; I don't have admin perms to use it.)

Not talking about this particular case, but about the case of RAM not working in general.

19
basicplus2 2 days ago 0 replies      
Now I know why that project I designed didn't work...
20
gesman 2 days ago 2 replies      
A chip inside the computer with a little telephone hidden inside.

You don't mind if your computer dials in to China sometime, do you?

8
A Thorium-Salt Reactor Has Fired Up for the First Time in Four Decades thoriumenergyworld.com
316 points by jseliger  9 hours ago   104 comments top 21
1
philipkglass 8 hours ago 2 replies      
Better explanation here (linked from Technology Review): http://www.thoriumenergyworld.com/news/finally-worlds-first-...

More details on the experiment sequence: https://public.ornl.gov/conferences/MSR2016/docs/Presentatio...

This is not actually a reactor test because the thorium-bearing salt does not attain criticality. It's a sequence of materials tests using thorium-containing salt mixtures in small crucibles inside the conventionally fueled High Flux Reactor (https://ec.europa.eu/jrc/en/research-facility/high-flux-reac...).

The experiments rely on neutrons from the High Flux Reactor to induce nuclear reactions in the thorium-bearing salt mixtures. However, the experiments will be useful in validating materials behavior for possible future molten salt reactors because they combine realistic thermal, chemical, and radiation stresses.

2
Sukotto 3 hours ago 5 replies      
I think we're making a serious PR mistake calling these "Thorium Reactors" even though the term is accurate.

"Reactor" evokes "Nuclear Reactor". For many people, "nuclear reactor" is a deeply loaded term. Likewise "Thorium" (and other words that end in "-ium") sounds dangerously like "plutonium" and "uranium".

It doesn't matter how much better/safer this technology is. Don't expect the public to respond positively when we use those words. There's too much knee jerk, "no nukes!" baggage.

We should start calling these "salt power stations" or something else accurate, yet non-threatening. Otherwise, IMHO, it will be a steep uphill battle getting public and legislative support for building these things, regardless of their many benefits.

3
ChuckMcM 7 hours ago 0 replies      
A couple of comments;

First it is really awesome to see actual research experiments being done on the materials. This is a critical first step in understanding the underlying complexity of the problems and as the article points out it is really helpful to have a regulatory agency that is open to trying new things.

The second is that this isn't a 'Thorium-Salt Reactor'; it is 'parts that would go into parts that would make up such a reactor, if the experiments indicate they will work.' A much less clickbaitey headline, but such is 21st-century journalism.

4
PaulHoule 7 hours ago 4 replies      
I am surprised they are using stainless steel instead of Hastelloy-N

http://www.haynesintl.com/alloys/alloy-portfolio_/Corrosion-...

The Hastelloy family of super alloys is basically stainless steel without the steel and was proven in the Oak Ridge MSR experiment.

5
dabockster 5 hours ago 5 replies      
> charged particles traveling faster than the speed of light in water

What did I just read?

6
velodrome 7 hours ago 1 reply      
This technology, if viable, could help solve our current nuclear waste problem. Valuable materials could be recycled (by separation) for additional use.

https://en.wikipedia.org/wiki/Nuclear_reprocessing#Pyroproce...

https://youtu.be/oAVCaUonrbE?t=12m7s

7
jhallenworld 6 hours ago 0 replies      
So there was a meltdown at a liquid sodium cooled reactor due to a materials problem:

https://en.wikipedia.org/wiki/Sodium_Reactor_Experiment

I don't see a pump seal test in this experiment... does anyone know if a solution to the SRE meltdown problem is known at this point? Perhaps the LFT chemistry would not have the issue.

8
skybrian 6 hours ago 0 replies      
This was apparently at the High Flux Reactor in Petten, Netherlands.

https://articles.thmsr.nl/petten-has-started-world-s-first-t...

9
bhhaskin 6 hours ago 0 replies      
Really happy to finally see some movement with Thorium. It might not be the magic silver bullet that some people hype it up to be, but it needs to be explored.
10
novalis78 3 hours ago 0 replies      
Great that they are mentioning ThorCon's project in Indonesia. Too bad that they had to leave the US after trying really hard to find a way to build it here.
11
zython 7 hours ago 2 replies      
I was under the impression that thorium-salt reactors had been tried in the past and not deemed "worth it" from a security and profitability point of view.

What has changed about that?

12
xupybd 6 hours ago 1 reply      
"The inside of the Petten test reactor where the thorium salt is being tested is shining due to charged particles traveling faster than the speed of light in water."

What? I thought that wasn't possible. Or is this just the speed of light in water, so the particles are still moving slower than the speed of light in a vacuum?

13
nate908 7 hours ago 4 replies      
What's up with this image caption?

"The inside of the Petten test reactor where the thorium salt is being tested is shining due to charged particles traveling faster than the speed of light in water."

As I understand it, nothing travels faster than the speed of light. The author is mistaken, right?

14
tim333 8 hours ago 0 replies      
Glad to see they are resuming research even though there remain problems with it as a commercial technology.
15
unlmtd1 58 minutes ago 0 replies      
I have a better idea: horsecarts and sailships.
16
SubiculumCode 5 hours ago 2 replies      
Anyone have insight on this piece I read years ago? http://www.popularmechanics.com/science/energy/a11907/is-the...

I worry that if thorium reactors become very, very common because they are thought to be very safe (e.g. behind-your-house common, as some have bragged), but they turn out to be dangerous... we will have a real problem.

17
zmix 5 hours ago 1 reply      
As far as I know the Chinese are also putting much effort into this type of reactor.
18
acidburnNSA 2 hours ago 0 replies      
Glad to see some thorium-bearing salt being irradiated in a conventionally-fueled test reactor. That's a big step to getting back on the road to fluid-fueled reactors.

Here are some reminders for everyone on the technical info about Thorium. First of all, Thorium is found in nature as a single isotope, Th-232, which is fertile like Uranium-238 (not fissile like U-235 or Plutonium-239). This means that you have to irradiate it first (using conventional fuel). Th-232 absorbs a neutron and becomes Protactinium-233, which naturally decays to Uranium-233, a fissile nuclide and good nuclear fuel. This is called breeding. Thorium is unique in that it can breed more fuel than it consumes using slow neutrons, whereas the Uranium-Plutonium breeder cycles require fast neutrons (which in turn require highly radiation-resistant materials, higher fissile inventory, and moderately exotic coolants like sodium metal or high-pressure gas). Any kind of breeder reactor (Th-U or U-Pu) can provide world-scale energy for hundreds of thousands of years using known resources and billions of years using uranium dissolved in seawater (not yet economical).
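
Spelled out as a chain (including the short-lived Th-233 intermediate that the sentence above compresses; half-lives are approximate, from memory):

    \[ {}^{232}\mathrm{Th} \xrightarrow{(n,\gamma)} {}^{233}\mathrm{Th} \xrightarrow{\beta^-,\ \sim 22\ \mathrm{min}} {}^{233}\mathrm{Pa} \xrightarrow{\beta^-,\ \sim 27\ \mathrm{d}} {}^{233}\mathrm{U} \]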

Great, so Thorium can do thermal breeding, so what? Well to actually breed in slow neutrons, you have to continuously remove neutron-absorbing fission products as they're created (lest they spoil the chain reaction), so you really can only do this with fluid fuel. This leads to an interesting reactor design called the Molten Salt Reactor (MSR). Fun facts about this kind of reactor are that it can run at high temperatures (for process heat/thermal efficiency), can run continuously (high capacity factor), is passively safe (can shut down and cool itself without external power or human intervention in accident scenarios), and doesn't require precision fuel fabrication. Downsides are that the radionuclides (including radioactive volatiles) are not contained in little pins and cans like in solid fueled reactors so you get radiation all over your pumps, your heat exchangers, and your reactor vessel. This is a solvable radiological containment issue (use good seals and double-walled vessels) but is a challenge (the MSRE in the 1960s lost almost half of its iodine; no one knows where it went!!)

U-Pu fuel can work in MSRs as well, getting those nice safety benefits, but it can't breed unless you have fast neutrons.

People on the internet may tell you that Thorium can't be used to make bombs and that it's extremely cheap, etc. These are not necessarily true. You can make bombs with a Th-U fuel cycle (just separate the Pa-233 before it decays), and nuclear system costs are unknown until you build and operate a few. There are reasons to hope it could be cheaper due to simplicity, but there are major additional complexities over traditional plants or other advanced reactors in the chemistry department that add a lot of uncertainty. Fluid fueled reactors are probably ~100x or more safer than traditional water-cooled reactors, on par with sodium-cooled fast reactors and other Gen-IV concepts with passive decay heat removal capabilities.

19
ece 4 hours ago 0 replies      
India has the most thorium reserves, according to USGS: https://en.wikipedia.org/wiki/Occurrence_of_thorium

And they have had the plans and motivation to build domestic reactors for the past two decades: https://en.wikipedia.org/wiki/India%27s_three-stage_nuclear_...

NSG membership keeps getting held up by someone or other, and would provide more energy security for India.

http://timesofindia.indiatimes.com/india/nuclear-reactor-at-...

20
genzoman 8 hours ago 1 reply      
Very excited about this tech, but I think it will be regulated to death.
21
MentallyRetired 6 hours ago 0 replies      
How'd you like to be the guy pressing the button for 40 years?
9
Disconnect. Offline only bolin.co
465 points by danmeade  19 hours ago   164 comments top 40
1
mck- 12 hours ago 6 replies      
Beauty. Almost a piece of art.

I was on a plane yesterday (literally on airplane mode) and I finished a book I've been working on for a month, and prepped/wrote half of a presentation. Quite often I produce much of my writing on a plane.

I find myself very productive on a plane. Especially on cheap flights that don't have in-flight entertainment. Literally no distractions for a preset amount of time. You're not only offline, you're also physically stuck. Best way to make the time fly by is by being productive.

2
beat 13 hours ago 5 replies      
For those interested in managing online time and getting ourselves offline regularly, the book Deep Work, by Cal Newport, has some very useful ideas. One that I plan to start experimenting with is the idea of scheduled internet access - allow yourself to get online only at certain times of day. This isn't just for work. Even if you're, say, standing in line at the grocery store, you don't get to pull your phone out and check your email.

As the author points out, we've forgotten how to be bored. We need to learn to engage that part of our brain again.

https://www.amazon.com/Deep-Work-Focused-Success-Distracted/...

3
nicklaf 7 hours ago 1 reply      
The funny thing is, smartphones are all but useless for many tasks the minute you go into airplane mode. There are exceptions, but you're basically holding a client to a distributed operating system which has appropriated many of the promises of wearable personal computing for corporate profit.

So yes, if you've already ceded your right to not be interrupted by running apps like Twitter and Facebook in the background, then I can see the appeal of cord cutting.

Of course there also exists the possibility of at least trying to use these devices without ceding this autonomy in the first place, but that requires admitting just how little today's social media offerings will have to do with this approach.

And no, a smartphone is not the right place to do research anyway. In fact, neither is the WWW using off-the-shelf browsers, but it's the version of Hypertext we're stuck with for now.

4
gervase 12 hours ago 2 replies      
> your ability to Google something

In my opinion, this actually is something that makes me valuable. It doesn't matter how well you can synthesize information if you can't find it in the first place.

Having the ability to take a problem, figure out what you don't know, reprocess those parts into a format that Googles well, filter out the noise from the results, and only then synthesize the information gathered is actually not as common as you might think.

5
norswap 12 hours ago 0 replies      
While I sympathize, that would be forgetting all that the net and online-ness has done for me. I would be a fundamentally different (and, in my current estimation, worse) person if I didn't have the net. It's been an engine of personal growth much more than one of distraction.

It might not be the same for everyone, of course. But I still think going offline is giving up too much.

The pendulum doesn't have to swing all the way in the other direction. Couldn't we just focus on being more responsible in our net consumption and promoting the good, beneficial stuff instead?

6
webXL 18 minutes ago 0 replies      
Au contraire, mon ami: javascript:window.dispatchEvent(new Event('offline'))
7
paperpunk 14 hours ago 1 reply      
I got stuck in a bad habit of idly browsing the internet whenever I'm at home and then having to rush to get to work, so I've created rules on my router to disable web access in the mornings before work and late at night before bed.

It is quite effective and I suddenly do other things but I do worry that it's a psychological crutch which is just going to make self-control even harder in the long term.

8
numbers 10 hours ago 0 replies      
I love this line:

"Do your research online, but create offline."

A lot of times, I'm working on something and in the zone and then all of a sudden I see an iMessage notification and forget my thoughts almost instantly.

9
groundCode 9 hours ago 0 replies      
Somewhat brilliant, in that by forcing me offline it left me distraction-free while reading the piece. Basically it fostered an environment in which I was more likely to read to the end of the article.
10
chrisbolin 6 hours ago 1 reply      
y'all have been very kind. the productivity tips are very helpful. here's my system:

- sit down at my desk with laptop and phone

- disable wifi on my laptop

- turn my phone face-down on the desk, muted, with wifi/data still on

This lets me check if I have any new messages via my phone, but it is a polling system vs an interrupt system. I have to opt in to check. And I am very aware that using my phone looks and feels less productive, so I try to avoid using it too long.

I've been able to be pretty productive (as a software engineer) with this system. I find that I have to reconnect on my laptop about every 30 minutes to do something or another. Of course, every day varies.

11
ptspts 12 hours ago 1 reply      
How is this page implemented for Chrome? It looks like it is using service workers. Is there a tutorial?

EDIT: Tutorial for Chrome here: https://developers.google.com/web/fundamentals/getting-start...

12
patatino 18 hours ago 4 replies      
Well done, I had the urge to google the second most commonly spoken language while reading the article.

I turned my phone into some kind of "dumb phone":

- Deleted all games, news apps, basically all the apps I don't regularly need

- Turned off email. It's still configured, I turn it on if I need to read an email

- No push notifications at all

Next step: Turn off mobile data for browser and only activate it if I need to read something. I'm just not ready yet!

13
thinbeige 10 hours ago 0 replies      
Super nice idea. I remember when I was very young and the Internet was also young, maybe just two, three years old, I experienced something strange. At that time we still used US Robotics 56k modems to connect to the Internet. When I was offline my computer felt dead. Worthless. Useless. Only when I was online did my computer feel right, and I felt good.

You have to imagine that I loved my self-built PCs even before the Internet came. I spent so much time with them, upgrading them, spending night and day installing and trying new software, stuff like Sierra and Lucasfilm adventures, Clipper/dBase, Turbo Pascal, QuarkXPress, Corel Draw, saving for hardware such as PostScript laser printers, an AdLib and later a Sound Blaster sound card, SyQuest hard drives, flatbed scanners, all the typical stuff. And once the Internet came, an offline computer felt like a dead computer.

14
Unbeliever69 13 hours ago 3 replies      
I am probably one of the few techies in the world that does not own/use a smartphone. Since 2011 I have used a cheap Verizon flip phone for exactly this reason; I want more control over my life. NOTHING is that important that I need to be plugged in 24/7. My wife has a smart phone which is great for when we travel (maps, Yelp, Fandango, Uber, etc.) Don't get me wrong, I was standing in a long line outside the Apple store the day the first iPhone came out. However, over the years I came to realize that in order to have the amount/type of work/life balance I desired, technology would have to take a back seat to my relationships and interests outside of my career.

I haven't looked back.

15
Multicomp 15 hours ago 0 replies      
Using a WP7 device has really forced me to come to terms with all the online cruft and clutter I look at all day. When I only have email, calls and text, I really do seem to see a lot more in life.
16
hondish 4 hours ago 0 replies      
I just created a new location in my MacBook's network settings, called it 'getitdoneland', and removed all the network services from it. Now my laptop has an 'airplane mode'. Thankfully, it takes just enough seconds for connectivity to return when I switch back to my regular location that I think I'll be dissuaded from distraction. Friction is good sometimes. Back to work...
17
Nekobai 15 hours ago 0 replies      
This article seems to discuss similar stuff to The Shallows: https://www.amazon.co.uk/Shallows-internet-changing-think-re...
18
5_minutes 14 hours ago 0 replies      
On old Nokia phones you could create profiles, like "work" and "weekend", and configure each profile's functions and possible distractions.

I figure there's something like that for android but on iOS you can only put on: do not disturb. It works though.

19
ohthehugemanate 6 hours ago 0 replies      
It really does require a lot of discipline to stay focused nowadays. In my personal life I'm terrible at it. I wish I read more, like I did before the Internet ate my life.

But at work, I HAVE to be disciplined. I start my day with 1 hour of communications catch-up, including stand-ups and slack. Then I turn off slack, and get to work. My phone is set to do not disturb automatically starting at 10am. I check messages when I'm on a break, going to the bathroom, on lunch, etc... But the messages are never allowed to interrupt me.

Works for me, at least.

20
tenkabuto 12 hours ago 0 replies      
The point about chasing links in articles is interesting to me. One of my favorite activities is loading articles up in Pocket for all-but-offline reading and Pocket's Listen feature, which uses Android's text-to-speech (?), to listen to articles.

The point about articles being written differently according to whether the author expects that the article will be read offline or not interests me, though, especially if a decent amount of background/context provision is outsourced via providing a link to documents that cover such material.

21
bryananderson 11 hours ago 0 replies      
I use an iOS app called Freedom to disconnect. I can block the entire Internet (excluding iMessage and FaceTime) or a list of sites (social media, news, etc) for a period of time. This way I can still contact people, but cannot browse idly.

Is it a crutch? Sure, but crutches work. If you were dealing with alcoholism, the best thing you could do would be to remove your ability to easily access alcohol.

22
lypextin 14 hours ago 1 reply      
To me, using cron to disconnect my internet every half an hour just to remind me to break the loop has been immensely helpful. And annoying. But mostly helpful.

My brain switched to offline mode has about three times better focus.

Similarly to this, I'm using my browser in full-screen mode most of the time to eliminate distractions. It was very surprising to me, how big an effect it has, to not see the tabs.

23
binaryapparatus 15 hours ago 3 replies      
Doesn't work on FreeBSD/Firefox? First I put the interface down, so no network access; on the second try I physically pulled the cable out. Nothing happens.
24
cableshaft 14 hours ago 0 replies      
My phone bricked a week and a half ago, and I've been using a phone I accidentally walked into a lake with two years ago. It still completely works except for cell service, so it only updates when I'm connected to WiFi now. I also have a Google Voice number, so my texting works from that phone as well, but again only where there's WiFi.

I'm going to make a claim, pay a deductible, and get a new proper phone at some point, but I've been a bit lazy and delaying it a bit because it hasn't been too bad going without.

Although I did have one bad experience since it happened (almost immediately after). My car's battery died, and I had to walk for almost an hour next to a dangerous street to get to some place that had WiFi so I could sort out getting my car towed and open Uber to have someone pick me up.

25
hozae 7 hours ago 0 replies      
What is the bounce rate on your page?
26
fizixer 10 hours ago 1 reply      
I would love to work offline. But my work involves constant use of commercial software that won't run without a license-check connection to its central license server.

edit: just occurred to me. I should try to script my connectivity, so the connection is established just before the software is used, and terminated soon after. Looking into it.

27
viach 11 hours ago 0 replies      
Console -> Network -> Offline [x] also works.
28
csomar 12 hours ago 1 reply      
If you are, like me, interested in reading the post but do not want to get offline:

> window.dispatchEvent( new Event( "offline" ) );

29
iapurv 10 hours ago 0 replies      
The irony was that I was unable to upvote this well written post since I was in airplane mode.
30
schnevets 13 hours ago 3 replies      
Totally agree. Unfortunately, 90% of my work is development in a SaaS platform, so getting anything done will require my device to remain online.

Does anyone know of a Chrome Plugin/hack that might block all but a few web pages? Then I can enjoy the silence of working without distractions while still plugging into the application that I'm working with.

31
wenham 14 hours ago 1 reply      
For those that don't want to go offline (and would rather 'miss the point'): view source and go to the .js file.

The text of the offline site is about a third of the way down.

32
myf01d 13 hours ago 1 reply      
in your console put

> window.dispatchEvent(new Event("offline"))

33
hatsunearu 12 hours ago 1 reply      
Would be great if the website worked. Went offline and nothing happened.
34
KirinDave 10 hours ago 1 reply      
Doesn't work at all for me on Linux Chrome, Linux Firefox, Windows Firefox, or Windows Chrome.

Is it a joke? Or just poor tradecraft?

35
afshinmeh 12 hours ago 0 replies      
Thanks for that `window.dispatchEvent(new Event('offline'))` option though.

I had to be online and read the article at the same time :P

36
kuschku 13 hours ago 0 replies      
In Firefox, just use Alt-F to open the File menu, and check "work offline" to view this page.
37
amelius 15 hours ago 4 replies      
Hmm, in Chromium developer tools, in the Network tab, I set throttling to "Offline", but nothing happened in the page.
38
saikatsg 11 hours ago 0 replies      
Very cool
39
r0fl 13 hours ago 1 reply      
The article won't load for me.
40
nnd 11 hours ago 0 replies      
Don't blame the internet, numerous sources of distractions existed long before. If you are easily distracted, the real cause lies elsewhere.
10
Why is this C++ code faster than my hand-written assembly (2016) stackoverflow.com
423 points by signa11  1 day ago   182 comments top 17
1
abainbridge 1 day ago 4 replies      
A couple of weeks ago I'd never heard of Peter Cordes. Now the linked article is the third time I've seen his work. He's doing a fine job of fixing Stack Overflow's low-level optimization knowledge. Not so long ago all I seemed to find there were people saying things like, "well, you shouldn't optimize that anyway", or, "modern computers are very complex, don't even try to understand what's happening".
2
kazinator 1 day ago 3 replies      
TL;DR: > If you think a 64-bit DIV instruction is a good way to divide by two, then no wonder the compiler's asm output beat your hand-written code.
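
For context, a minimal C sketch of the kind of hot loop the question benchmarks (assuming, as I recall, it's the well-known Collatz-length one; the function name and types here are illustrative):

    /* Counts a Collatz sequence length; the even branch is the division at issue. */
    unsigned collatz_len(unsigned long long n) {
        unsigned len = 1;
        while (n != 1) {
            /* With an unsigned type, a compiler lowers "n / 2" to a single
               shift (n >> 1). For signed types it adds a small fix-up such
               as (n + (n < 0)) >> 1 to round toward zero, still avoiding
               the multi-cycle DIV instruction. */
            n = (n & 1) ? 3 * n + 1 : n / 2;
            len++;
        }
        return len;
    }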

Once (maybe 25 years ago?) I came across a book on assembly language programming for the Macintosh.

The authors wrote a circle-filling graphic routine which internally calculated the integer square root in assembly language, drawing the circle using the y = sqrt(r * r - x * x) formula!

What is more, the accompanying description of the function in the book featured sentences boasting about how it draws a big circle in a small amount of time ("only" a quarter of a second, or some eternity of that order) because of the blazing speed of assembly language!

How could the authors have used, say, MacPaint, and not been aware that circles and ellipses can be drawn instantaneously on the same hardware, fast enough for interactive drag-and-drop resizing?

3
payne92 1 day ago 7 replies      
tl;dr -- the asm author used DIV to divide by a constant 2

More fundamentally: it's theoretically possible to at least match compiled code performance with assembly, because you could just write the code the compiler generates.

BUT, it requires a LOT of experience.

Modern compilers "know" a lot of optimizations (e.g. integer mult by fixed constant --> shifts, adds, and subtracts). Avoiding pipeline stalls requires a lot of tedious register bookkeeping, and modern processors have very complicated execution models.
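
As a concrete illustration of that multiply-by-constant lowering (written as C for readability; real output varies by compiler and target, and may use LEA instead):

    /* One way a compiler can decompose a multiply by 10: */
    unsigned mul10(unsigned x) {
        return (x << 3) + (x << 1);   /* 8*x + 2*x == 10*x */
    }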

It's almost always better to start with a compiler-generated critical section and see if there are possible hand optimizations.

4
bluedino 1 day ago 1 reply      
>> Have you examined the assembly code that GCC generates for your C++ program?

A very polite way of saying, "why are you even using assembly, when you don't understand assembly?"

5
AdmiralAsshat 1 day ago 2 replies      
The question was more interesting than the answer.

tl;dr version--the author's hand-written assembly was poor.

I guess the more interesting takeaway is "Just because it's assembly doesn't mean it's good assembly."

6
ericfrederich 1 day ago 0 replies      
For fun I ported the C++ to Python and Cython without any kind of mathematical or programmatic optimizations. C++ was 0.5 seconds, then Python was 5.0 seconds. Cython, which was the same exact code as Python except sprinkled with "cdef long" to declare C types, was just 0.7 seconds.
7
SeanDav 1 day ago 3 replies      
General comment and not aimed at this specific instance:

Just because you are writing in assembler does not mean it is going to run faster than the same code in a compiled language. There have been decades of research and who knows how many man-years of effort put into producing efficient compiled code from C, C++, Fortran, etc.

Your assembly skills have to be of quite a decent order to beat a modern compiler.

BTW: The answer to the question on Stack Overflow by Peter Cordes is a must-read. Brilliant.

8
iamjk 1 day ago 0 replies      
The people who write "article answers" like this on SO are the real MVPs of the web.
9
raphlinus 1 day ago 8 replies      
Apologies if this is somewhat off-topic for the thread, but I suspect this will be a fun puzzle for fans of low-level optimization. The theme is "optimized fizzbuzz".

The classic fizzbuzz will use %3 and %5 operations to test divisibility. As we know from the same source as OP, these are horrifically slow. In addition, the usual approach to fizzbuzz has an annoying duplication, either of the strings or of the predicates.

So, the challenge is, write an optimized fizzbuzz with the following properties: the state for the divisibility testing is a function with a period of 15, which can be calculated in 2 C operations. There are 3 tests for printing, each of the form 'if (...) printf("...");' where each if test is one C operation.

Good luck and have fun!
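
(Spoiler ahead: one possible reading of the puzzle, a sketch only and not necessarily the intended answer. Since 2^15 mod 32767 = 1, multiplying by 2 modulo 32767 rotates a 15-bit one-hot state, giving a period-15 function in 2 C operations; each printing test is then a single AND:)

    #include <stdio.h>

    int main(void) {
        unsigned s = 1;                      /* bit k set <=> i % 15 == k */
        for (int i = 1; i <= 100; i++) {
            s = s * 2 % 32767;               /* rotate within 15 bits: 2 ops */
            if (s & 0x1249) printf("Fizz");  /* bits 0,3,6,9,12: i % 3 == 0 */
            if (s & 0x0421) printf("Buzz");  /* bits 0,5,10: i % 5 == 0 */
            if (s & 0x6996) printf("%d", i); /* remaining bits: neither */
            printf("\n");
        }
        return 0;
    }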

10
bjoli 1 day ago 1 reply      
I know it is not the point of the question, but that problem would benefit greatly from memoization. Calculate it recursively and memoize the result of every step. With all the neat trickery that they are doing with assembly they could easily go sub-10ms.

I whipped together a short PoC in Chez Scheme, and it clocks in at about 50ms on my 4-year-old laptop.
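
(A minimal C sketch of that memoization idea, with hypothetical names, assuming the question's usual search range of starting values below one million:)

    #define LIMIT 1000000
    static unsigned short memo[LIMIT];       /* lengths in this range fit easily */

    unsigned len(unsigned long long n) {
        if (n < LIMIT && memo[n]) return memo[n];
        unsigned r = (n == 1) ? 1 : 1 + len((n & 1) ? 3 * n + 1 : n / 2);
        if (n < LIMIT) memo[n] = (unsigned short)r;  /* cache every step */
        return r;
    }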

11
elcapitan 1 day ago 2 replies      
tldr: compiler replaces /2 with a shift.
12
msimpson 1 day ago 0 replies      
> If you think a 64-bit DIV instruction is a good way to divide by two, then no wonder the compiler's asm output beat your hand-written code...

Compilers employ multitudes of optimizations that will go overlooked in hand-written ASM unless you, as the author, are very knowledgeable. End of story.

13
coldcode 1 day ago 2 replies      
When I started programming on an Apple II+, assembly was important. Today there are likely only a few people in the world who truly understand what any particular CPU family is actually doing sufficiently well to beat the compiler in some cases, and they are probably the ones writing the optimizer. But the 6502 was fun to code for, and the tricks were mighty clever, yet you could understand them.
14
takeda 1 day ago 0 replies      
Not too surprising answer: "your assembly sucks"
15
m3kw9 1 day ago 0 replies      
Because the compiler has optimized it better than you.
16
smegel 1 day ago 0 replies      
> but I don't see many ways to optimize my assembly solution further

I can't do it therefore it must be impossible!

17
barrkel 1 day ago 0 replies      
This was a borderline help vampire question, but it ended up working out well, probably for nerd-sniping reasons.
11
Wall Street Banks Warn Downturn Is Coming bloomberg.com
305 points by champagnepapi  15 hours ago   357 comments top 4
1
chatmasta 14 hours ago 28 replies      
The pattern of boom/bust cycles over the past century is alarmingly consistent, especially for a field like economics that is famously unpredictable. Just look at the graph in Exhibit 7 of this article. It's almost perfectly periodic. According to investopedia [0], "there have been 11 business cycles from 1945 to 2009, with the average length of a cycle lasting about 69 months, or a little less than six years." By this logic, we're definitely "due" for a downturn very soon.

Does anyone who understands finance have any insight on why this pattern seems so predictable? Is it due to fundamental economic drivers, or is it merely correlated with major historical events (internet 2000s, globalization 1990s, deregulation 1980s, post-WW2 society 1950s, etc)?

If technological society does not continue to innovate at the pace of the last few decades, will boom/busts smooth out at a point of slower growth?

[0] http://www.investopedia.com/terms/b/businesscycle.asp

2
jseliger 13 hours ago 2 replies      
The nice thing is that if you predict enough downturns you'll eventually be right.

The cliché goes, "Economists have predicted nine of the last seven recessions," but I think the numerator is actually higher.

This article: https://www.theatlantic.com/magazine/archive/2008/12/why-wal... was published in 2008 but is still underrated.

3
chollida1 14 hours ago 7 replies      
What I'm most excited to see in the event of a market downturn is how well Betterment and Wealthfront hang onto their clients.

I'm guessing that the average client of those firms hasn't really lived with a significant stock market investment during a bear market. Will these clients keep their money invested in a larger percentage than the typical ETF investor?

If so then I think that's a huge bullish signal for these new types of wealth management firms.

If not, then those companies are going to have to go out and raise money in a downturn.

If you can hold peoples money during a downturn, then I view that as a very positive investment signal, there has to be something more than the dollar, bitcoin, hedge funds, and gold that people can turn to in a downturn.

4
daxfohl 11 hours ago 5 replies      
Figures. At age 41, I just last week went in debt for the first time in my life, moving my family from a home we owned in Michigan to the red-hot Seattle market for a job offer I couldn't turn down, spending well over a million to do so.

It's just starting to dawn on me now how precarious a situation we're in. Going in, it's like "awesome job, a million+ house will be worth 1.2M+ next year, maybe 1.4+ after that." Seemed reasonable.

My cynical side is just waiting for the market to implode now, our loan to be upside-down, and layoffs to start. Everything was so good six months ago. House paid off, a rainy-decade reserve. And now reality is sinking in that we could be homeless, jobless and bankrupt in a month. Hopefully not. Hopefully the economy has a couple more years in it. I'd probably not make the same decision again.

12
Apple Scales Back Its Ambitions for a Self-Driving Car nytimes.com
264 points by fmihaila  1 day ago   399 comments top 5
1
tambourine_man 1 day ago 12 replies      
From the beginning, the employees dedicated to Project Titan looked at a wide range of details. That included motorized doors that opened and closed silently. They also studied ways to redesign a car interior without a steering wheel or gas pedals, and they worked on adding virtual or augmented reality into interior displays.

The team also worked on a new light detection and ranging sensor, also known as lidar. Lidar sensors normally protrude from the top of a car like a spinning cone and are essential in driverless cars. Apple, as always focused on clean designs, wanted to do away with the awkward cone.

Apple even looked into reinventing the wheel. A team within Titan investigated the possibility of using spherical wheels (round like a globe) instead of the traditional round ones, because spherical wheels could allow the car better lateral movement.

Very interesting, and one heck of a leak if true.

2
iiiggglll 21 hours ago 5 replies      
> Even though Apple had not ironed out many of the basics, like how the autonomous systems would work, a team had already started working on an operating system software called CarOS. There was fierce debate about whether it should be programmed using Swift, Apple's own programming language, or the industry standard, C++.

Wow. Few things guarantee success like starting off a project with a good old-fashioned language flamewar!

3
IBM 1 day ago 1 reply      
This is a weirdly titled report which implies it just happened. The "Apple scales back" part was already reported first by Bloomberg last year (which seems to be behind a paywall now) [1]. Bob Mansfield was brought on to refocus Project Titan on the fundamentals (being self-driving) rather that producing a car [2]. But both of these reports have the exact same hedging:

>Apple Inc. has drastically scaled back its automotive ambitions, leading to hundreds of job cuts and a new direction that, for now, no longer includes building its own car, according to people familiar with the project.

>Five people familiar with Apple's car project, code-named Titan, discussed with The New York Times the missteps that led the tech giant to move, at least for now, from creating a self-driving Apple car to creating technology for a car that someone else builds.

And that's because the idea that Apple is going to be an auto parts supplier like Delphi that sells middleware to car companies is completely laughable.

There isn't actually much news in this report. The tidbits that the reporter got clearly motivated writing this article but it doesn't actually live up to its premise. In fact, PAIL seems like an expansion of Apple's efforts from what was previously reported.

[1] https://www.macrumors.com/2016/07/28/apple-car-autonomous-dr...

[2] https://www.bloomberg.com/news/articles/2016-10-17/how-apple...

4
mypalmike 1 day ago 7 replies      
If Apple were truly serious about building self-driving cars, they would buy one of the big 3 US auto manufacturers. It could buy all 3 with cash and still have one of the largest hoards of cash ever accumulated.
5
1_2__4 1 day ago 3 replies      
Can the "mass production self driving cars are just a couple of years away" meme finally die yet? Are we ready to admit that maybe this is a harder thing to invent than we've been trying to make ourselves believe?
13
Hackers nab $500k as Enigma is compromised weeks before its ICO techcrunch.com
305 points by etherti  1 day ago   238 comments top 33
1
che_shirecat 1 day ago 9 replies      
How to make money in ethereum, from high to low risk:

1. Dump the leftovers of your bi-weekly software engineering paycheck into buying ETH, BTC, or whichever altcoin is popular this week. It went up 5000% in the past, it's got to keep growing right?

2. Participate in an ICO and stock up on whatever platform token they're hawking. It's more profitable if you get in early due to some presale mechanism (hopefully here you aren't sending your hard-earned digital currency to a hacker's wallet). Sell these tokens about 4-5 days after the sale closes, before the hype dies down and the bagholders realize they're holding sand.

3. Even more profitable is kicking off your own ICO. Go through the checklist - fancy HTML5 theme that you can buy off of Themeforest and edit the HTML a bit for the landing page, create a Slack channel/Twitter account/subreddit, write a "whitepaper" that is easy enough for the shmucks you're targeting to understand, yet replete with enough pseudo-academic crypto jargon and irrelevant/unnecessary mathematical symbols to get the shmucks nodding their heads and pretending to understand how this particular algorithm/equation based on the "turing-complete ethereum blockchain" will "change the world" or "bank the unbanked" or, more importantly to them, appreciate 500x in value. Don't forget listing the members of your team and advisors, ideally with as much credential signalling as you can - "MIT," "Stanford," "Comp Sci Phd," "McKinsey," all work here, fake it till you make it and make sure you list Vitalik Buterin on your list of advisors just for that extra bit of technical legitimacy. Use centuries-old sales tactics to pitch your ICO - butter up your target audience's sense of superiority by emphasizing exclusivity - they're the only clever ones, they're the genius computer nerds who understand the 1000x potential of your algorithm, they're the ones that are breaking free of the shackles of regulated securities. Create a sense of urgency with a ticking timer on your landing page, a 24-hour window to buy your monopoly money, a subtle/not-so-subtle hint that the earlier you get in, the more you'll make.

4. You could always just put on your black hat and rob these extremely soft targets blind. The simpler the method, it seems, the better. Plus, there's absolutely no risk of ever being held accountable - that's the beauty of anonymous cryptocurrency!

2
crypt1d 1 day ago 3 replies      
This might as well be a scam created by the CEO himself. I mean, who in this 'crypto world' would be stupid enough to use the same, previously compromised, password on all his accounts?

P.S. there was a story on reddit (can't find it now unfortunately) about how the attackers tried to deposit the money to Bittrex but luckily someone alerted them and the exchange froze the account. So there is still some hope that funds will be returned.

3
richardknop 1 day ago 7 replies      
Another ICO, another scam. I am not sure I feel bad for people who are gullible enough to send their hard-earned money to these "companies". They have one PDF whitepaper and a generic WordPress template website with some buzzwords, and are based in the Cayman Islands or some other tax haven for money laundering. And expecting to get rich from that.
4
kirualex 1 day ago 1 reply      
How to make quick money in 2017:

- Create a startup in the blockchain world

- Make an ICO to raise money

- Get "hacked"

- ...

- Profit!
5
jstanley 1 day ago 2 replies      
From what I can tell, the ICO wasn't hacked as such. The ICO customers were just scammed.

Edit: title is better now

6
djhworld 1 day ago 0 replies      
Enigma is building a decentralized, open, secure data marketplace that will change how data is shared, aggregated and monetized to maximize collaboration. Catalyst is our first product and the first application running on our Enigma protocol. Powered by our financial data marketplace, Catalyst empowers users to share and curate data and build profitable, data-driven investment strategies.

So much said that explains so little.

7
CryptoPunk 1 day ago 1 reply      
HackerNews gets an unrepresentative picture of the token market. The only stories that get to the frontpage are the ones concerning hacks. But that's not the whole picture. There are a huge number of token sales happening, and the vast majority are not being hacked. This is certainly newsworthy but it needs to be put into the context of how many token sales occur.
8
JohnKacz 1 day ago 2 replies      
Sorry for the uneducated question here, but I've wondered how those who steal cryptocurrency "launder" their ill-gotten coin. More specifically, how do they not get caught, since the blockchain records everything? Take it out really quickly? Move little bits around to obscure ownership in some kind of shell game? Am I just totally off-base with my understanding?
9
tlrobinson 1 day ago 2 replies      
I'm not sure I buy the premise that every service needs its own coin. Surely in most cases it would be better to just use the most widely used, most stable coin?

If the concept of cryptocurrency is going to survive I think there needs to be one or two clear winners to eventually bring some stability to their value.

Of course, not issuing your own coin doesn't leave as much opportunity to get rich quick off a bit of hype.

10
ritarong 1 day ago 1 reply      
It's amazing that this has happened multiple times before and yet people have not learned to be more careful. Greed and FOMO.
11
anovikov 1 day ago 1 reply      
As a Russian proverb goes, "a thief stole a thief's hat"
12
joosters 1 day ago 1 reply      
It just sped up the inevitable losses from another scammy ICO. In this case, the hackers made the process more efficient.
13
ascendantlogic 1 day ago 1 reply      
This is getting a lot of traction because anything related to crypto evokes really strong responses here, but this was a phishing attack. The fact it happened in the crypto space is largely secondary.

People get phished and get tricked into handing out bank account and credit card details all the time. It's not even newsworthy unless it happens on a large scale. This is only newsworthy because of the fact that it's crypto so people equate this with some sort of deficiency with the technology and/or ecosystem. That's not the case.

14
jacquesm 1 day ago 0 replies      
Ah, the good old 'the hacker did it' story. Never fails. I suspect a pretty large fraction of the 'hacker did it' cases are inside jobs.
15
wickedlogic 1 day ago 0 replies      
User machine, not blockchain, security will continue to be the biggest risk in all these systems.

With gold for example, stealing the physical assets takes effort, resources, time, equipment, etc.

With digital assets, that is not the case... and our current level of system security is not adequate in the slightest. It is a challenge we are still largely ignoring today, but crypto currencies will require it be fixed, or better-risk-managed at any rate.

(not advocating gold over digital, but people continue to hand wave the actual risks)

16
nickbauman 1 day ago 8 replies      
Cryptocurrency is probably doomed because the makers of cryptocurrencies have a fundamental conceptual disconnect with money: what it is, why it works and what it represents. Money only works when you have a powerful state actor enforcing the legality of the transaction. When you try to escape that, you get at best a parallel system that still goes back to the state for help in keeping functioning, or a system prone to failure, fraud and speculation. To wit: yet another one of these incidents.
17
SirensOfTitan 1 day ago 1 reply      
As with most things security, people tend to be the weakest link in the chain.

This type of issue could be solved in a lot of ways. I think a solution wherein:

1. ICOs use a standard 'escrow' contract wherein ether and coin get held by the contract for 7 days or so before either party can withdraw the opposite pair (where either can back out).

2. Building some standard 'ether address' widget that verifies the type of contract an address is. A user-wallet would usually be a warning sign.

18
option 1 day ago 1 reply      
The ICO concept is fundamentally solid and more efficient than traditional funding sources. What's currently lacking is the implementation. Both technological and legal frameworks need lots of work, but I bet it'll happen.
19
hathym 1 day ago 1 reply      
ICO is the new ponzi scheme
20
raesene6 1 day ago 0 replies      
This looks like another in a series of ICOs which are not being handled with appropriate security controls.

When people are planning on taking in millions of dollars of investment in an easily traded, easily stolen, digital currency, they've got to expect attention from relatively well funded/motivated attackers.

Unfortunately many of the founders of these ICOs don't seem to be that well setup in this regard as some of the disclosed hacks, including this one, aren't exactly advanced.

21
paultopia 1 day ago 2 replies      
OMG. People, PLEASE STOP WITH THE RANDOM NEW CRYPTOCURRENCIES.
22
OscarTheGrinch 1 day ago 1 reply      
Initial Clown Outwitting
23
EternalData 1 day ago 1 reply      
There's a boom and a bust cycle when it comes to new technologies -- doubtless blockchain will have to go through the buzzsaw just like the early commercial Internet did in the early 2000s.
24
drngdds 1 day ago 1 reply      
Do the victims ever get their money back after these cryptocurrency hacks/scams? I know crypto transactions are irreversible by nature, but do the coins ever get seized by law enforcement and returned to their owner? If not, that seems like a major problem. (I know they got around the DAO hack, but that's a unique case.)
25
tdb7893 1 day ago 5 replies      
I wish that passwords like this would stop being the main form of authentication. I guess I'm not sure what's a better way (I like the physical object + PIN of my credit card, but that's probably not practical for all web authentication), but it seems pretty obvious that passwords are broken in their current form unless you use a password manager, which can be a hassle.
26
jmilloy 1 day ago 1 reply      
Can we get the title fixed? The editorialized title is misleading/inaccurate. (edit: Thanks, it's fixed now.)
27
tzz 1 day ago 2 replies      
There are a lot of scams on the web, but you don't blame the HTTP protocol. There are a lot of email scams, but you don't blame SMTP. Sad to see the dominant view of this community is against any type of cryptocurrency.
28
NicenJehr 1 day ago 0 replies      
> 3. Weekly password rotation, and daily rotation in the week leading to the token sale

this seems useless

29
EGreg 1 day ago 2 replies      
Question: in light of the SEC decision regarding the DAO, is there any way to do an ICO that doesn't run the risk of the whole company later being shut down for not registering securities? Like maybe opening the company in Crypto Valley, Zug?

Is there a way to do a public offering of tokens? Or does it necessitate all the same reporting that a publicly listed company has?

Could still be worth it! Because the investors control even less of your board than in the Snapchat IPO.

30
dmtzz 1 day ago 1 reply      
what does nab mean?
31
throw2016 1 day ago 0 replies      
It appears cryptocurrencies have escaped the technical domain and landed plumb in Nigerian scam territory.

The cryptocurrency ecosystem has become toxic and irrational, propped up by ignorance, desperation and blind greed.

I wonder what arguments will be made to third world countries at the next climate change summit when a large number of our population seem to be squandering electricity without pause in the hope of riches.

The only way any crypto takes off in the world we live in is if some powerful vested interest sees some use for it, at which point all the speculators, having spent the better part of the past decade pushing fantasy narratives about freedom etc., will sell out every single tall claim for a dime. Those who do not understand history, and in this case economics, are condemned to repeat it, and badly.

32
mycosis 1 day ago 1 reply      
we need to put a hold on all cryptocurrency startups until we figure out what the hell is going on
33
senatorobama 1 day ago 1 reply      
So what, just a couple of years worth of work at Google as a SWE.
14
Love it or hate it, truckers say they cant stop listening to public radio current.org
277 points by pesenti  2 days ago   270 comments top 24
1
ourmandave 2 days ago 2 replies      
And I said, "Well, so why don't you stop listening?" Murphy continued. And he says, "I can't, because it's the only station that will go on mile after mile and I can pick it up again."

Just drove from IA to MI and it's true. Between 90.3 and 92.7 FM there's always an NPR station waiting.

Or I could just download their app and listen anywhere / anytime.

http://www.npr.org/about/products/npr-one/

2
patrickg_zill 1 day ago 2 replies      
This is a PR piece. It has no correlative relationship to reality.

Source: knowing truckers, having them in my family, extended family, friends of family etc. for 40 years. They all have XM/Sirius at this point. None listen to NPR.

Why were no statistics about truckers and NPR quoted? Because they wouldn't have supported the thrust of the article.

3
lsdafjklsd 2 days ago 5 replies      
my dad is a fox news republican, a few years back we were driving for 5 hours and were listening to npr. he was basically waiting for the overt liberal bias, but instead we listened to a bunch of fascinating stories and interesting shows about a variety of topics. I read somewhere that npr isn't liberally biased, but their fan base is mostly liberal, because liberals tend to prefer news that has no bias. It's also not loud and obnoxious, which they like.
4
codingdave 2 days ago 1 reply      
> its the only station that will go on mile after mile and I can pick it up again.

I can vouch for that -- I just went on a road trip to visit all the state parks in Utah, and believe me, once you get out of Salt Lake, your choices are NPR, country music, or static. I listened to a lot of NPR.

5
sparcpile 2 days ago 0 replies      
Both XM and Sirius had a big following among truckers before they merged. Even after the merger, they still have shows and channels tailored to them. The big reason was that they could listen to the same channels across the country.
6
ilamont 2 days ago 6 replies      
Surprised that more truckers haven't discovered podcasts.
7
fourmii 2 days ago 0 replies      
A couple of years ago, I took my family on a road trip starting from Phoenix and ended up in SF. We drove north from Phoenix and hit a few of the amazing parks including Yellowstone, then west to Portland and south to SF. And public radio was part of the fun for us. Aside from our beloved NPR, I loved how the only stations we could get gave us an insight into who probably lived in the areas we were driving through. I never really liked country music, but on that trip, I grew to actually appreciate it. Not to mention the number of new songs and artists we were able to discover. Some of those songs we now associate with the various places along the way.
8
cool-RR 2 days ago 0 replies      
Today I learned: If you start a sentence with "Love it or hate it," it instills a temporary feeling in the reader that he cares about the thing you were saying.
9
mrmondo 1 day ago 1 reply      
Not strictly related to public radio specifically, but still interesting on the topic of broadcast radio (and rather disappointing to say the least): here in Australia we have a big problem with the (lack of) advancement of broadcast quality. While we have DAB+ broadcasts, the stream audio quality is so bad due to the low bit rate that you're actually better off switching back to FM.

The average Australian DAB+ radio stream is... 24-48Kbps! Yes, it's generally AAC+, which is about 30% more efficient than MP3, but it's not even close to FM sound quality.

10
crispyambulance 1 day ago 1 reply      
Not surprising.

I suspect the reason is that there is a lack of appealing alternatives to NPR on the "conservative" side.

If anyone has ever listened to conservative radio like Rush Limbaugh, you'll know that his slow-witted attempts at humor/satire are cringe-worthy failures unless the listener happens to be a septuagenarian with dementia. The other conservative options are preachers and conspiracy wack-jobs... is there anything else?

11
Dowwie 1 day ago 0 replies      
My family has listened to NPR and WNYC for years. I don't recommend anyone rely on it as an exclusive source for information as it does serve special political interests. Two examples that come to mind are the presidential primaries coverage (it was pro-HRC) and more recently coverage on the Saudi Arabia vs Qatar debacle (conveniently omitting Saudi involvement in global terrorism).
12
rdl 1 day ago 0 replies      
I think truckers are one of the groups of people that really love Sirius/XM satellite radio, too -- uniform programming and coverage across the US, and if you are in your truck all the time, the monthly cost is trivial. Sirius even has special stations dedicated to truckers, as well as most of the ads added by the network being trucker-focused.

(If it were me, I'd go for audiobooks, though.)

13
perpetualcrayon 1 day ago 0 replies      

 "its the only station that will go on mile after mile and I can pick it up again"
Captive audience. In my experience, when I've lived in very rural areas (and a lot of large metro areas), the only TV stations I could ever get over antenna were PBS, but more often than that I could only get FOX.

14
nwatson 2 days ago 2 replies      
I listen to a number of NPR podcasts and perhaps I'm misremembering but it seems there's an uptick in the past few years of interviewing and discussing the lives of just "regular folks", people who work blue-collar jobs, people with conservative religious backgrounds, etc., ... and not in a disparaging way.

But ... Fresh Air with Terry Gross still largely wanders the fields of the cultural left with nary a nod to alternative viewpoints.

15
prevailrob 1 day ago 3 replies      
Being from the UK, I take it NPR is the equivalent of Radio 4? (Albeit on a bigger scale, naturally)
16
ComodoHacker 1 day ago 0 replies      
Compare that to state radio and television in the late USSR. It wasn't strictly obligatory. But there wasn't anything else, so you ended up watching and listening to it anyway.

Imagine NPR as a giant propaganda machine, powerful and tuned up, but currently working in idle mode (or not?)

17
forapurpose 2 days ago 3 replies      
> Aside from the content, according to Murphy, drivers like NPR for the continuity. They can keep listening to the same programs from state to state.

Why don't truckers use satellite radio? They could listen to the same programs anywhere in the (U.S.? World?).

18
SoulMan 2 days ago 1 reply      
Wonder if AM is covered everywhere throughout the country. Here in India, radio in trucks was around for a very short period of time, till side-loaded cassettes, CDs and side-loaded MP3s took over. I think we never had great "talk" content. Even in 2017 I tune to BBC1, the World Service and VOA in my car via the internet, but I get mocked a lot. Guess no one else does it. People here are used to side-loaded MP3 music and not-so-intellectual Bollywood FM radio.
19
baursak 2 days ago 2 replies      
As much as I like NPR, my politics have slowly drifted left over the years, and it's amazing how differently the same shows and hosts now sound to me through a more critical filter. To see what I mean, I recommend browsing through https://twitter.com/npr_watch (not my account, and I'm not affiliated in any way).
20
bluetwo 2 days ago 0 replies      
Wife used to make fun of me. Now she is addicted.
21
hprotagonist 2 days ago 2 replies      
Looking for a "bubble-free" media venue? Not a bad place to start.
22
0xbear 1 day ago 2 replies      
Used to listen to NPR all the time, but now they just shit on Trump 24x7. It was fun in the beginning, but got old after a few weeks. I switched to audible, where I remain to this day, and will remain for the foreseeable future.
23
eighthnate 1 day ago 0 replies      
It's always entertaining when stories of NPR or Foxnews or anything political comes up.

You always see comments like "my conservative father/mother/brother/friend/etc listened to NPR to find liberal bias and found none".

Or "my liberal father/mother/brother/friend/etc listened to foxnews and couldn't find conservative bias".

If your conservative family member or friend couldn't find anything liberal on NPR, or your liberal friend couldn't find anything conservative on foxnews, then they must be hard of hearing and should have their ears checked by a doctor.

People are overcompensating too much or they are embarrassed by what they listen to. It's okay to admit that NPR is liberal. It's okay to admit that foxnews is conservative. It's why they exist. If NPR isn't liberal then it isn't doing its job and not serving their fanbase. If foxnews isn't conservative, then it isn't doing its job and not serving their fanbase.

It is just so obvious what people are trying to do and it is annoying.

Edit: Just a PSA... Current was founded by the people who founded NPR and PBS. So I'd take what they have to say, especially about NPR or PBS, with a grain of salt.

"whose members were leaders in founding the PBS and National Public Radio."

https://en.wikipedia.org/wiki/Current_(newspaper)

24
throwaway2016a 2 days ago 16 replies      
I love NPR. My wife makes fun of me because sometimes I sit in the car with the battery / radio on after I park just to hear a story finish.

New Hampshire Public radio has a lot of local news and features too, which I can't hear anywhere else. I'm sure other markets are the same.

You can get it on Alexa too because most of the shows are also podcasts which is also great.

I've also heard a lot from my conservative friends that NPR is too liberal, but to be honest, I haven't seen that at all. The more entertainment-like programs, such as "Wait Wait", do sometimes have liberal jokes, but I think anyone who can take a joke would laugh at them even if they are conservative. [1]

[1] I'm a Libertarian and thus socially liberal and economically conservative, which puts me in the great position of being able to laugh at jokes at the expense of either of the two major parties.

15
Sonos: users must accept new privacy policy or devices may cease to function zdnet.com
258 points by ralphm  1 day ago   246 comments top 32
1
justinjlynn 1 day ago 9 replies      
It seems like nobody actually owns anything any more - that we're all just digital serfs living on someone else's land. I really don't know why anyone would willingly make such a deal.
2
cm2187 1 day ago 6 replies      
To be honest, I have had a pretty average experience with Sonos so far. It is connected with Cat6 ethernet and professional switches, with no other connectivity problems on that network (and I tested all the cables). I have 3 systems (a Play:1, a Play:5 and the Sonos amp), and they keep losing track of each other; I have to regroup them regularly. They also struggle with long music tracks (i.e. 1h podcasts off a Synology shared drive) and often stall in the middle.

If they brick my devices, I will only be half upset.

3
pantsofhonking 1 day ago 2 replies      
Wow, talk about blowing something out of proportion. The new software comes with new terms. If you don't accept the new terms, you keep the existing software. Over time, it is possible that the current software will stop working with e.g. some future Pandora API, and you'll have a choice of either updating your software or forgoing that feature.

I have a Sonos in every room of my house and I've owned them since the very first generation. Sonos has been extremely good about updating the software. The current software still works on the very first hardware, with all the functionality save for a single feature, room-correcting equalization, that requires the newer DSP. This company is the gold standard of ongoing software support for consumer goods and this article is trying to spin the situation in just the perfect way to make the Internet commentariat explode.

4
sverige 1 day ago 1 reply      
I hate all smart devices. The TV should just be a TV, the dishwasher should wash dishes, the refrigerator should keep stuff cold, the washing machine and dryer should clean my clothes, and speakers should just produce sounds. I have yet to hear any compelling reason to make these devices dependent on software.
5
MikeGale 1 day ago 3 replies      
I suspect that legislators are so far behind the curve on this, that they'll never protect decent humans from such guys.

Answer: Forget the protection afforded by the state. Protect yourself. Blacklist the scum manufacturers, warn your acquaintances.

What other suggestions are there?

6
solomatov 1 day ago 1 reply      
This problem should be solved in a legislative way, similar to Europe's GDPR. There should be some minimum privacy rights which can't be opted out of and which are protected by government. That's the only viable solution. Markets don't help here.
7
lamecicle 1 day ago 2 replies      
I remember a time when the phrase "if you don't know what the product is, you're the product" made sense.

Now it's, "if you don't kn... oh f*ck it, you're the product!"

8
CaptSpify 1 day ago 0 replies      
I looked at these speakers a few months ago. They look really cool. As soon as I saw that they require phoning home, I said "lolno" and built my own speaker system with RPIs.

I love the idea of smart devices, but only as long as the software is Free and Open. I really don't understand people who think situations like this are acceptable.

9
swiley 1 day ago 1 reply      
If you can't read the source and build the firmware yourself, you don't own the device.

It's that simple. Stop putting up with closed, non-trivial firmware and these sorts of problems go away.

10
jeffehobbs 1 day ago 1 reply      
I'll be honest, I'm just glad to see they are still actually working on their software.
11
jwr 1 day ago 1 reply      
While I don't like this new development, I will drop in one data point: I bought my first Sonos devices more than 11 years ago, and over this time they all received software updates with increasing functionality. Think about it: my first Sonos players were bought in the pre-iPhone times. A lot has changed in the tech world since then.

This is something other manufacturers could learn from. It seems these days most products are launched by marketing teams: fire and forget, the moment the product is out the door, all software development ceases and it never gets updated.

I do hope they reconsider the new privacy policy, though. It's worrying.

12
eveningcoffee 1 day ago 1 reply      
There should be a way to fight back. If this kind of thinking spreads even more, it will suffocate our society.
13
mnw21cam 1 day ago 1 reply      
Sale of goods act (and similar consumer protection laws in so many countries around the world)?
14
hkmurakami 1 day ago 0 replies      
This is why I will never want my home to be "smart".

You'll have to pry my physical wall switches and copper wires from my cold, dead hands.

15
allwein 1 day ago 1 reply      
I might understand this if it was for new customers going forward. But I don't understand how they can tell their existing customers this and not expect a lawsuit.
16
pedrocr 1 day ago 4 replies      
Anyone have another suggestion for a pair of wifi speakers that can be assigned to Left/Right to get stereo, that can be network-streamed to, with another device on the network providing a line-in?

I was about to buy 2 Play1's and a Connect to do a stereo install in a room where I don't want to run speaker cable. I had researched wackier home-built solutions and was going to give up and go for the Sonos. Now I'm once again considering wackier home-built stuff like 3 raspberry pi's attached to line-in and two dumb powered speakers.

17
bogomipz 1 day ago 3 replies      
I am sincerely curious, and maybe a Sonos owner can offer some feedback: what does Sonos offer me that a streaming music provider, a smartphone and a portable bluetooth speaker don't?

I understand that Sonos can stream to multiple "zones" simultaneously but besides the occasion of a house party how often is this necessary?

This news to me is just another reason for me to never buy one.

18
voidz 1 day ago 4 replies      
The actual solution is simple: stop using these devices.
19
ace_of_spades 20 hours ago 0 replies      
Maybe it's just a naive idea, but wouldn't it make sense to have "smart devices" be smart insofar as they have some local computation power and the ability to communicate at short range? Maybe have a local orchestrator (e.g., Apple TV, Google Home, what not) dedicated to communicating with companies, while the rest stays rather dumb and completely interchangeable. Why is literally every smart device sending information back to another company? Why aren't distributed IoT and smart-object paradigms more of a thing already?
20
JimRoepcke 1 day ago 0 replies      
Sonos: I am altering the deal. Pray I don't alter it any further.
21
iomotoko 1 day ago 1 reply      
Hmm, please excuse me, all, if this is wrong, but isn't this the exact same way it works with pretty much all of the updates from e.g. Apple and co?

Let's say a new iTunes update comes along; this requires the user to opt into a privacy policy. If there happens to have been a change in said policy since the last update, then accepting the new conditions is required in order to install the update. Same for an update to browsers, iOS, Android, ...

I am not in favour of it, just confused as to why this specific case is singled out. Especially since not updating critical software (operating systems, browsers, et cetera) seems to have far more serious consequences than with a speaker?

22
avs733 1 day ago 0 replies      
What is provided to the consumer/user in exchange for agreeing to this contract? I assume, because I am becoming increasingly nihilistic about technology, that challenges to this would fail in court. However, this seems to fail both the consideration and the competency and capacity elements of a functional contract.
23
softwaredoug 1 day ago 5 replies      
Is there a use case for Sonos that good Bluetooth speakers don't address more simply?
24
invisible 1 day ago 0 replies      
I'd be interested in how those complaining about this "problem" would handle the business decision of how to update your terms when adding software that relies on third-party offerings. If they add support for Some Music Service, do they make it an awful experience where you have to agree to the terms of each vendor?

In earnest, what is a better approach here?

25
INTPenis 1 day ago 1 reply      
I bought one of those for my gf because I wanted to see if it was any good.

Quick review.

iOS users have to use the Spotify app, which is lousy.

Google Play users can cast to it, thankfully.

Major positive point: it uses wifi and supports casting from Google Play. But my gf, who uses an iPhone, hates it.

Overall we prefer the Marshall bluetooth speakers over Sonos, because in an apartment there's rarely a need for casting music over wifi.

Edit: Chromecast audio is also a viable alternative. Based on how well my regular Chromecast (video) works for me I assume the audio one is as good.

26
DarkKomunalec 1 day ago 1 reply      
RMS was right again.
27
pm24601 1 day ago 2 replies      
And reason #458 why I am skipping the whole "IoT revolution" in my home.

I consider this motivation for DIY.

28
circa 1 day ago 0 replies      
For some reason I read this as Sophos. That would not be good.
29
thrillgore 1 day ago 1 reply      
I've long considered going to Sonos, but I think I'll stay with my Plex VM and my NAS.
30
exabrial 1 day ago 0 replies      
Lawsuit time
31
natch 1 day ago 0 replies      
Maybe there is a Google acquisition looming and this is being dictated by Google. Speculation obviously, but look what happened with Nest.
32
throwaway2016a 1 day ago 2 replies      
As a Sonos user (I have close to $2000 worth of products) this actually doesn't bother me.

To all the people talking about ownership: I find it hard to believe the aux-in will cease to work. So worst case, they turn into regular speakers.

What it sounds like is you won't be able to update your firmware. So more likely than not, everything would keep working but random Internet related services (like Spotify Integration) may break over time because, for instance, if Spotify changes their API you won't get the software update to fix it.

And that is why I think it is OK. Software updates over the Internet are always subject to licensing. That is not new and not unique to Sonos.

16
Introducing Network Service Tiers googleblog.com
285 points by ropiku  11 hours ago   172 comments top 27
1
Veratyr 10 hours ago 11 replies      
A long-standing complaint of mine is that cloud egress pricing severely limits the usefulness of compute. If I want to, say, process some visual effects on a large (1TB) ProRes video, I might spend $1 on the compute but $100 on the egress getting it back.

Unfortunately these changes don't really resolve that problem. "Standard" pricing is a paltry 20% less. That 1TB video egress still costs $80 and for that price I can rent a beefy server with a dedicated gigabit pipe for a month.

Why is "Cloud" bandwidth so damned expensive?

I'd love a "best effort" or "off peak" tier. I imagine Google's pipes are pretty empty when NA is asleep and my batch jobs aren't really going to care.

2
pbbakkum 9 hours ago 3 replies      
A few notes here:

- An unmentioned alternative to this pricing is that GCP has a deal with Cloudflare that gives you a 50% discount to what is now called Premium pricing for traffic that egresses GCP through Cloudflare. This is cheaper for Google because GCP and Cloudflare have a peering arrangement. Of course, you also have to pay Cloudflare for bandwidth.

- This announcement is actually a small price cut compared to existing network egress prices for the 1-10 TiB/month and 150+ TiB/month buckets.

- The biggest advantage of using private networks is often client latency, since packets avoid points of congestion on the open internet. They don't really highlight this, instead showing a chart of throughput to a single client, which only matters for a subset of GCP customers. The throughput chart is also a little bit deceptive because of the y-axis they've chosen.

- Other important things to consider if you're optimizing a website for latency are CDN and where SSL negotiation takes place. For a single small HTTPS request doing SSL negotiation on the network edge can make a pretty big latency difference.

- Interesting number: Google capex (excluding other Alphabet capex) in both 2015 and 2016 was around $10B, at least part of that going to the networking tech discussed in the post. I expect they're continuing to invest in this space.

- A common trend with GCP products is moving away from flat-rate pricing models to models which incentivize users in ways that reflect underlying costs. For example, BigQuery users are priced per-query, which is uncommon for analytical databases. It's possible that network pricing could reflect that in the future. For example, there is probably more slack network capacity at 3am than 8am.

3
brunoTbear 9 hours ago 3 replies      
I quite like the way Google has drawn the map here: since no cables reach from India to Europe, they've split the map there, making the paths easier to trace between Asia and NA. https://2.bp.blogspot.com/-QvF57n-55Cs/WZypui8H8zI/AAAAAAAAE...

Compare with the difficulties of https://cloud.google.com/images/locations/edgepoint.png

Elegant and subtle work. Just like the networking.

4
jstapels 11 hours ago 3 replies      
Egress pricing for Google and AWS (sans Lightsail) continues to be one of the biggest price differences between them and smaller hosts such as Linode and DigitalOcean.

I think Google missed an opportunity here. They should have cut the prices more significantly for standard tier (sacrificing performance) to make this more competitive.

Right now Linode's and DO's smallest $5 plan offers 1TB of transfer, which would cost $85.00 on Google's new standard plan.

5
idorosen 11 hours ago 3 replies      
TL;DR: New Standard tier level is hot potato routing while existing (now called Premium) tier is cold potato routing.

https://en.wikipedia.org/wiki/Hot-potato_and_cold-potato_rou...

6
always_good 10 hours ago 3 replies      
This precedent (including CloudFlare's new private routing) doesn't bode well for the public internet.

Imagine the day when everyone has to use private routing and the public internet barely even gets maintained anymore.

Of course, public internet also suffers tragedy of the commons and not much is happening on that front. Like how most people are still behind ISPs that allow their customers to spoof IP addresses. And nobody has reason to give a shit. We're getting pinned between worst of both worlds. It's a shame.

7
breck 11 hours ago 1 reply      
Seeing the map of Google's network makes me appreciate more the impact of undersea cables.

If you're interested in the history of earth-scale networks I recommend this free documentary on Cyrus Field and the heroic struggle to lay the first transatlantic cable: https://www.youtube.com/watch?v=cFKONUBBHQw

8
jloveless 10 hours ago 0 replies      
Google's network (especially w/ BBR[1]) is amazing and this makes the price point more approachable for other use cases (like running your own CDN[2]).

[1] https://cloudplatform.googleblog.com/2017/07/TCP-BBR-congest...

[2] https://blog.edgemesh.com/deploy-a-global-private-cdn-on-you...

9
jerkstate 10 hours ago 6 replies      
How is this different from paying more for a fast lane, which net neutrality is supposed to prevent?

Edit: there seems to be a bit of confusion about what I'm referring to. I'm referring to the Open Internet Order of 2015 [1], which states:

18. No Paid Prioritization. Paid prioritization occurs when a broadband provider accepts payment (monetary or otherwise) to manage its network in a way that benefits particular content, applications, services, or devices. To protect against fast lanes, this Order adopts a rule that establishes that: A person engaged in the provision of broadband Internet access service, insofar as such person is so engaged, shall not engage in paid prioritization.

[1] https://transition.fcc.gov/Daily_Releases/Daily_Business/201...

10
xg15 10 hours ago 3 replies      
Not that I'm really surprised, but does that map imply that Google has its own trans-atlantic undersea cables? Is there some more info about that?
11
sitepodmatt 10 hours ago 1 reply      
I suppose this was inevitable; the costs of cold potato routing must be prohibitive, especially in more exotic places: for example, riding the GCP network from just a few milliseconds away in Bangkok all the way to a tiny GCP compute instance in London, on practically all GCP network (exc. the first three hops locally). The GCP network is awesome; I am surprised we only see a small pricing reduction for the standard offering. Perhaps the idea is to eventually make it a 2-3x price difference, a premium worth it imo if you consider that one would push most bandwidth-heavy assets onto edge CDNs anyway.
12
heroic 11 hours ago 3 replies      
> There are at least three independent paths (N+2 redundancy) between any two locations on the Google network, helping ensure that traffic continues to flow between these two locations even in the event of a disruption. As a result, with Premium Tier, your traffic is unaffected by a single fiber cut. In many situations, traffic can flow to and from your application without interruption even with two simultaneous fiber cuts.

What does this mean? N+2 redundancy should mean that even if two paths go down, the service will not be affected at all, no?

13
cwt137 11 hours ago 3 replies      
I thought I read an article about an online game company that was doing something similar with its users: trying to get its users onto its private network as soon as possible. Does anyone else remember that article on HN?
14
ssijak 11 hours ago 0 replies      
Reading this, I was just stunned by how many stacks, layers, pieces of hardware, technologies, and how much accumulated knowledge those bytes needed to travel through so that I could read them on a laptop across the globe.
15
jedberg 10 hours ago 2 replies      
The most interesting thing to me here is that they can actually deliver a cheaper service by going over the public internet. I would think their private net would be cheaper because they don't have to pay for transit.

I guess transit is still cheaper than maintaining one's own lines...

16
0x27081990 9 hours ago 1 reply      
I thought they were firm supporters of Net Neutrality. Or is this somehow a different case?
17
Animats 9 hours ago 2 replies      
Don't sign up for a Google service unless you get contract terms which say they can't terminate you at will and have penalties if they do.
18
gigatexal 4 hours ago 0 replies      
I just think the funny thing is that instead of taking a hit on the price of the current level of service, their shiny new feature is standard networking! Woot!!
19
Tepix 7 hours ago 0 replies      
> "Over the last 18 years, we built the worlds largest network, which by some accounts delivers 25-30% of all internet traffic

I think that's way more than enough already, thank you.

20
grandalf 7 hours ago 1 reply      
Ironically, this offering is precisely the argument against network neutrality -- different customers need different QoS guarantees.
21
0xbear 10 hours ago 1 reply      
So now we know why Google's egress was so expensive before. It was the premium offering, and standard wasn't quite ready yet.
22
benbro 10 hours ago 1 reply      
Can I use the new standard tier with all services like cloud storage or only with compute instances?
23
hartator 8 hours ago 0 replies      
It's kind of interesting that after being so against preferred networking via their net neutrality stance, they basically implemented it.
24
unethical_ban 7 hours ago 0 replies      
The title should not have been changed - old version noted the product referenced, Google Cloud Compute.
25
CodeWriter23 9 hours ago 1 reply      
It would be nice if they would say if their pricing is per GB or per TB.

https://cloud.google.com/network-tiers/pricing

26
lowbloodsugar 11 hours ago 2 replies      
So is this like the App Engine price hike debacle a few years ago, but with "better" messaging? So "Try Network Service Tiers Today" means "Migrate to Standard Tier today to avoid the massive price increases coming soon"?

But fundamentally, they just massively underestimated costs and need to find a way to adjust pricing. With App Engine it was very conveniently in beta, so they used the end of beta for the price hike. For this, they're having to invent a "Premium" and a "Standard" tier, and hey, guess what: everyone has been using "Premium".

My experience so far with Google has been "Use this now, and we'll have a massive price hike later, if we keep it around at all."

27
arekkas 10 hours ago 1 reply      
Ok so lobby against net neutrality but don't give a * in your own network. "Don't be evil", right?
17
The Interface font family rsms.me
283 points by glhaynes  11 hours ago   67 comments top 19
1
DiabloD3 6 hours ago 2 replies      
Dear font authors:

Please screenshot renderings in multiple important renderers, e.g.: Apple Safari on a Retina box (highlights weird over-bolding due to their hinting prefs), Chrome and Firefox on Windows (both use FreeType, but custom builds that don't quite match stock), and anything normal on a Linux that doesn't use a hacked-up FreeType (ergo Ubuntu is out, as are RHEL/CentOS and Fedora).

Also, in both white-on-black and black-on-white, because font rendering is non-linear with respect to the 2.2 gamma curve (fun fact: everybody still uses 1.8 gamma for font rendering).

2
jack_jennings 9 hours ago 2 replies      
This is based on Roboto (reusing some outlines directly; not initially acknowledged by the designer on the marketing site), and arguably doesn't tread much new ground either in character or use-case or license. Not convinced there is anything this adds to an already crowded space.
3
thinbeige 9 hours ago 4 replies      
A disproportionate sans serif without any letter spacing. It's free though, so better than nothing.

Edit: Dear downvoters, what I am saying is that you can take any random sans serif, reduce the letter-spacing, and end up with a similar-looking typeface which might be even more balanced. Despite my criticism, I expressed my high appreciation that the creator offers his work for free. If you disagree, let me know why instead of downvoting; maybe I am wrong and missed something.

4
chrissnell 24 minutes ago 0 replies      
I created an AUR package for this if any Arch users want to install the font:

https://aur.archlinux.org/packages/ttf-interface/

This package only installs the OTF version currently.

Let me know if you have any problems installing.

5
gjm11 9 hours ago 1 reply      
> Since this font is very similar to Roboto, glyph outlines from Roboto are indeed being used, mainly as "placeholders" while the glyph set is expanded. The Roboto license can be found in the source directory.

Mainly?

If this is a deliberate near-clone of Roboto, then at the very least there should be some explanation of how it differs and why.

6
AceJohnny2 8 hours ago 4 replies      
Looking at Interface's glyph map, I see that letter-O (O) is slightly wider than number-zero (0). Capital-eye (I) is indistinguishable from small-ell (l), though number-one (1) is distinguishable from both.

What other glyph ambiguities do you look out for on new fonts?

7
j_s 41 minutes ago 0 replies      
I was interested to discover fonts with programming-specific ligatures when they were discussed last month. I haven't experimented enough yet to know how well they work out in the long term.

https://www.hanselman.com/blog/MonospacedProgrammingFontsWit...

8
tgsovlerkhgsel 5 hours ago 2 replies      
Sadly, the digits have different widths, so if you have e.g. a right-aligned, increasing numerical value in your UI, its left digit will "wiggle around" as the last digit iterates through 0-9, and numerical values won't align with each other.

This may be OK for text, but specifically for user interfaces, this is the very first thing I check when considering whether a font is usable.

With a good font, it will be immediately obvious which of these amounts is more, while this font would likely mislead you:

$ 100000

$ 111111

Ironically, Roboto seems to get it right.

9
sagichmal 9 hours ago 0 replies      
It's nearly identical to the new Mac OS system font San Francisco (SF) but with tighter spacing and (subjectively to me) nicer finials and terminals. Looks great.
10
BafS 3 hours ago 0 replies      
Thanks for this font; I really appreciate the mix between Roboto and Helvetica or San Francisco. The cuts of Interface are more horizontal than Roboto's (look at the S, for example) and I find it more readable, rational and beautiful. Good job!
11
fairpx 8 hours ago 1 reply      
Nice font. Would there be a way for you guys to incorporate the font on a platform like Google's webfonts? For our business (http://fairpixels.pro) we are constantly looking for great fonts to use in the UI work we do for software teams. Having a scattered landscape doesn't help.
12
harrygeez 8 hours ago 0 replies      
I've been waiting for a font like this for forever! Finally a good alternative to Apple's San Francisco font.

Amazing job to the author!

13
rcarmo 9 hours ago 1 reply      
I actually came across this yesterday and set it as a system font on my Linux machine, which runs Elementary.

Although I don't have a HiDPI display, it is nicer and (subjectively) more readable than what I've tried before (I still use Fira Code for coding and Fira Mono inside the terminal, but for the UI I tried several variants of Fira, Roboto and other sans-serif fonts, yet none of them stuck).

14
nkkollaw 4 hours ago 0 replies      
Looks really good.

We really need good open source fonts.

15
fredsted 8 hours ago 0 replies      
This font is really pretty, and the text very readable. Great job.
16
cratermoon 6 hours ago 0 replies      
Upper case I and lower case l look too much alike. Letter O and number 0 need to be more distinguishable.
17
tudorw 8 hours ago 1 reply      
system font for the win...
18
virtuexru 9 hours ago 0 replies      
Very, very clean. I love it. The easier the read, the better, imho.
19
wyager 9 hours ago 7 replies      
I'm sure this is like tabs vs spaces for typographers, but why the hell do people make and use sans-serif fonts? Even ignoring aesthetics, the fact that there are horrendous ambiguities (like between I and l) renders these fonts completely inappropriate for computational tasks like copying passwords or secret keys. I have run into this problem multiple times on OS X and iOS, where the password managers use sans-serif fonts.
18
UK Government's Payment Infrastructure Is Now Open Source cloudapps.digital
255 points by edent  1 day ago   95 comments top 9
1
Nursie 1 day ago 4 replies      
Still got Google Analytics on the page.

I do not feel that reporting every online interaction I have with my government in the UK, back to a huge corporate in the US, is in any way appropriate. But I can't even get anyone to engage on the issue.

When I tried to raise it I got directed to a helpdesk ticket on a site run by an SV helpdesk-as-a-service company.

I appreciate that gov.uk have done some great stuff getting the UK government online, and their designs and Open Source attitude are refreshing, but this is a serious privacy issue.

2
chatmasta 1 day ago 1 reply      
I went through a visa application process for the UK over the past few months. The main gov.uk site is a very good website for finding information, well designed, works on mobile, etc. Coming from the US, that was quite refreshing -- there's no equivalent in the US as everything is scattered across 100 different agency websites in 50 states.

However the "business logic" of gov.uk is still sorely lacking. For the actual visa application process and payment, I was bounced around between 4-5 different third party websites handling different aspects of the process. I'm sure further integration with gov.uk is on the roadmap, and it will certainly be nice.

As a new resident of the U.K., though, I have to admit I've been pleasantly surprised and very happy with the gov.uk website so far.

3
robin_reala 1 day ago 1 reply      
If you haven't heard of this before, there's a good introduction to the project at https://gds.blog.gov.uk/2015/07/23/making-payments-more-conv...
4
sitepodmatt 1 day ago 1 reply      
Every interaction I have with a gov.uk portal is a painful UX disaster - most recently passport and driver license, both of which had a payment-submission stage. I can't imagine anyone saying "wow, look how gov.uk got it right, let's use their code": a glorified CMS with forms and payments bolted on - badly - so badly.

Just rechecked: it's still complete crap. They can't support the back button; there's no POST/redirect pattern, so you get "confirm form resubmission". https://passportapplication.service.gov.uk/

5
rekado 1 day ago 0 replies      
I'm happy to see that they are using GNU Guix: https://github.com/alphagov?q=guix
6
confounded 1 day ago 0 replies      
There are very few positive comments here, but I think it's fantastic that this progress has been made (even if it's not perfect). I had no idea the sites could be used without JS at all; that's brilliant!
7
camus2 1 day ago 0 replies      
Interesting: if you check out the tech, it's mostly Java for the backend and JavaScript for the front-end.
8
nepotism2018 1 day ago 4 replies      
9
pyb 1 day ago 3 replies      
This looks more like a manual. Where does it say that the infrastructure is open source ? I didn't see any source code.
19
Initial Hammer2 filesystem implementation dragonflybsd.org
225 points by joeschmoe3  2 days ago   67 comments top 7
1
jitl 2 days ago 7 replies      
Very exciting to see implementation progress on HAMMER2. Some basics about the design:

- This is DragonflyBSD's next-gen filesystem.

- copy-on-write, implying snapshots and such, like ZFS, but snapshots are writable.

- compression and de-duplication, like ZFS

- a clustering system

- extra care to reduce RAM needs, in contrast to ZFS

- extra care to allow pre-allocation of files by writing zeros, something that will make SQL databases easier to run performantly on HAMMER2 than on ZFS (see the sketch below the design-doc link)

And much more. The design doc is an interesting read, take a look:

https://gitweb.dragonflybsd.org/dragonfly.git/blob_plain/HEA...
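
For readers unfamiliar with the pre-allocation point: databases commonly reserve file space up front by writing zeros, so later writes land on blocks that are already allocated. A minimal C sketch of the userspace side of that pattern (the helper name and block size are mine, for illustration; the design doc's point, as I read it, is that a COW filesystem like ZFS doesn't normally keep those zeroed blocks in place, while HAMMER2 takes care to honor the pattern):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* Reserve `size` bytes by writing zeros, the way databases often
       pre-extend their files. Hypothetical helper, sketch only. */
    int preallocate(const char *path, size_t size) {
        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return -1;
        char zeros[4096];
        memset(zeros, 0, sizeof zeros);
        for (size_t off = 0; off < size; off += sizeof zeros) {
            if (write(fd, zeros, sizeof zeros) < 0) {
                close(fd);
                return -1;
            }
        }
        close(fd);
        return 0;
    }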

2
gigatexal 2 days ago 1 reply      
I had always thought Dillon was a hack when he forked FreeBSD at 4.x, but he's proven to have some novel ideas when it comes to these things, and I'm looking forward to trying out the production-ready HAMMER2 FS.
3
TallGuyShort 2 days ago 1 reply      
Other than the design doc (which, being BSD, is bound to be the primary source of truth), does anyone know of any tech talks or more visual presentations about the design of HAMMER FS? I sure do love that the doc is available, but when just starting to wrap your head around an FS architecture, talking through some slides would sure be neat. I'm not immediately seeing much on YouTube...
4
alberth 2 days ago 0 replies      
It's amazing how much work the DFly team has accomplished given how few developers there are.

I really hope DFly gets more adoption and broader use.

5
blue1 2 days ago 1 reply      
Does H2 feature data integrity (checksums etc)? For me that is one of the best features of ZFS
6
pmoriarty 2 days ago 4 replies      
Is there any work to get HAMMER2 on Linux?
7
beastman82 2 days ago 1 reply      
This sounds like a weapon system designed by Stark Industries.
20
Why I haven't jumped ship from Common Lisp to Racket just yet fare.livejournal.com
261 points by networked  1 day ago   89 comments top 11
1
flavio81 1 day ago 6 replies      
The author, a famous and well-liked Lisper, is not considering portability. CL is an ANSI standard, and code often runs with no changes on many distinct CL implementations/compilers/interpreters.

Also, related to that point: there are many different CL implementations out there that satisfy different use cases, like for example JVM deployment (ABCL), embedded systems (ECL), speed (SBCL), fast compile times (Clozure), pro-level support (LispWorks, ACL), etc. So the same code has a huge number of options for deployment. It really makes Common Lisp "write once, run anywhere".

Speed is also never mentioned. Lisp can be seriously fast; under SBCL it is generally 0.3x to 1x C speed; LispWorks might be faster, and there's a PDF out there called "How to make Lisp go faster than C", so that should give an idea of Lisp's speed potential.

CL wasn't created by philosophizing about what a programming language should be for many years; CL was basically created by merging Lisps that were already proven in industry (Maclisp, Zetalisp, etc), already proven to be good for AI, heavy computation, symbolic computation, launching rockets, writing full operating systems, etc.

CL is a "you want it, you got it" programming language. You want to circumvent the garbage collector? Need to use GOTOs in a particular function? Want to produce better assembly output? Need side effects? Multiple inheritance? Want to write an OS? CL will deliver the goods.

In short, I would guess that from a computer scientist's or researcher's point of view, Racket is certainly more attractive, but for the engineer or start-up owner who wants killer production systems done in short time, or to create really complex, innovative systems that can be deployed to the real world, Common Lisp ought to be the weapon of choice!

2
mjmein 1 day ago 2 replies      
Racket is a really exciting language, especially with its focus on building small languages to solve problems.

However, where it fails for me is in its lack of interactive development. When I investigated it, there seemed to be no way to actually connect a REPL to a running program.

Unlike with Common Lisp or Clojure, with Racket if you make changes to your code you have to restart the REPL, which destroys your state.

This was a big disappointment to me, because even Python with IPython and autoreload allows for more interactive development.

I suspect that this decision was made because of Racket's start as a teaching language; it is simpler, but way less powerful.

3
jlarocco 1 day ago 1 reply      
I use Common Lisp quite a bit, and I'm just not interested in switching to another Lisp. I've looked at most of them, and haven't seen anything compelling. CL still wins on everything that I care about (performance, portability, libraries, ease of use, books/documentation, etc.).

Even the article's list of areas where Racket is "vastly superior" is questionable, IME. Granted, the author wrote ASDF, so he has a very different perspective than I do on the module system, but in practice nothing on that list has been a problem for me, and a few of them I'd actually consider to be anti-features (like a built-in GUI library).

4
peatmoss 1 day ago 0 replies      
I really like this article, because it manages to be a love letter to both Racket and Common Lisp.
5
farray 1 day ago 1 reply      
Interestingly, I just added a section on Gerbil, the Lisp dialect I have adopted instead of PLT, for many personal reasons.
6
bjoli 23 hours ago 2 replies      
One thing that makes Racket shine is its macro facilities. syntax-case is nice and all that, but Jesus Christ in a chicken basket, I wish Scheme had standardised on syntax-parse.

syntax-case vs. syntax-parse isn't, and will never be, close to a fair comparison. Not only is syntax-parse more powerful, it also provides the users of your macros with proper error messages. It blows both unhygienic and other hygienic macro systems out of the water for anything more complex than very basic macros.

7
laxentasken 20 hours ago 1 reply      
If you work in CL for a living, may I ask what kind of applications and in which area? The reason for my question is that CL (and Racket) seems like a very good thing to put some time into, but the market for such jobs is dead where I live (Sweden). Or those jobs might be held by lispers for a lifetime ...
8
myth_drannon 14 hours ago 0 replies      
For anyone interested in Racket, an excellent book/online tutorial: http://beautifulracket.com/
9
i_feel_great 1 day ago 4 replies      
That Racket module functionality where you can add unit tests right with your code ("module+"), but will get stripped when compiled - that thing is quite magical. Is there another system that has this?
10
zerr 23 hours ago 4 replies      
Anyone using Racket in the wild? (Besides the Racket team)
11
lottin 19 hours ago 3 replies      
> trivial utilities for interactive use from the shell command-line with an "instantaneous" feel

Last time I checked CL images were huge though. Something like 24MB for a "hello world" executable, even bigger with some compilers.

21
A history of branch prediction danluu.com
275 points by darwhy  23 hours ago   61 comments top 14
1
userbinator 21 hours ago 6 replies      
The use of previous branch history plus the branch address as a "context" for prediction reminds me of the very similar technique used for prediction in arithmetic coding, as used in e.g. JBIG2, JPEG2000, etc. --- the goal being that, if an event X happens several times in context C, then whenever context C occurs, X is predicted to be more likely.

Also, since modern CPUs internally have many functional units to which operations can be dispatched, I wonder whether, when the "confidence" of a branch prediction is not high, "splitting" the execution stream and executing both branches in parallel until the result is known (or one of the branches encounters another branch...) would yield much benefit over predicting and then having to re-execute the other path if the prediction is wrong. I.e., does it take longer to flush the pipeline and restart on the other path at full rate, or to run both paths in parallel at effectively half the rate until the prediction is resolved?

2
ufo 13 hours ago 3 replies      
One surprising thing I discovered recently is that after Haswell, Intel processors got much, much better at predicting "interpreter loops", which are basically a while-true loop with a very large, seemingly unpredictable switch statement. It led to a dramatic improvement in microbenchmarks and made some traditional optimizations involving computed goto and "indirect threading" obsolete.

Does anyone know how it achieved this?
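
For anyone curious what the parent means by computed goto / indirect threading: instead of one central switch, each opcode handler does its own indirect jump, which historically gave the predictor a separate branch (and thus separate history) per opcode. A minimal GCC-specific sketch, with made-up toy opcodes:

    #include <stdio.h>

    static int run(const unsigned char *code) {
        /* GCC extension: &&label takes the address of a label. */
        static void *ops[] = { &&op_inc, &&op_dbl, &&op_halt };
        int acc = 0;
        goto *ops[*code++];                    /* dispatch first opcode  */
    op_inc:  acc += 1; goto *ops[*code++];     /* each op = own branch   */
    op_dbl:  acc *= 2; goto *ops[*code++];
    op_halt: return acc;
    }

    int main(void) {
        unsigned char prog[] = { 0, 0, 1, 2 }; /* inc, inc, dbl, halt */
        printf("%d\n", run(prog));             /* prints 4 */
        return 0;
    }

As for the "how": the improvement is usually attributed to a much better indirect-branch predictor in Haswell, which makes the plain-switch version predict nearly as well, though I haven't seen Intel document the mechanism directly.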

3
ramshorns 21 hours ago 3 replies      
Very informative. I missed the part about 1,500,000 BC, though: a time when our ancestors lived in the branches of trees?

Another beginner-friendly explanation of the effects of branch prediction is this Stack Overflow post, which compares a processor to a train: https://stackoverflow.com/questions/11227809/why-is-it-faste...

4
ajkjk 11 hours ago 2 replies      
Is there any system out there that supports branch 'annotations', of a sort, so that the programmer or the compiler can just tell the CPU what the branch behavior is going to be?

Like -- it seems kinda silly for the CPU to do so much work to figure out if a loop is going to be repeated frequently, when the code could just explicitly say "fyi, this branch is going to be taken 99 times out of 100".

Or, if there's a loop that is always taken 3 times and then passed once, that could be expressed explicitly, with a "predict this branch if i%4 != 0" annotation.
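
GCC and Clang do expose a static version of this: __builtin_expect, usually wrapped in likely/unlikely macros. Note that it mostly steers code layout and which path falls through, rather than programming the dynamic predictor (x86 branch-hint prefixes exist but are, as far as I know, ignored by modern cores). A minimal sketch:

    /* Wrappers in the style of the Linux kernel's likely()/unlikely(). */
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    int checksum(int fd, const char *buf, int len) {
        if (unlikely(fd < 0 || buf == 0))  /* error path laid out of line */
            return -1;
        /* hot path falls through without a taken branch */
        int n = 0;
        for (int i = 0; i < len; i++)
            n += buf[i];
        return n;
    }

Profile-guided optimization (-fprofile-generate / -fprofile-use) gets the same information from real runs instead of guesses.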

5
Sniffnoy 20 hours ago 1 reply      
> PA 8000 (1996): actually implemented as a 3-bit shift register with majority vote

This actually seems interestingly different from the two-bit saturating counter. Like, it's not just a different way of implementing it; you can't realize the saturating counter as a "quotient" of the shift/vote scheme.
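
The difference is easy to see in a toy simulation; here's a sketch of both state machines side by side (initial states chosen arbitrarily):

    #include <stdio.h>

    /* Two-bit saturating counter: 0-1 predict not-taken, 2-3 taken. */
    static int sat = 2;
    static int sat_predict(void) { return sat >= 2; }
    static void sat_update(int taken) {
        if (taken && sat < 3) sat++;
        else if (!taken && sat > 0) sat--;
    }

    /* PA-8000 style: shift register of the last 3 outcomes, majority vote. */
    static int h0 = 1, h1 = 1, h2 = 1;
    static int maj_predict(void) { return h0 + h1 + h2 >= 2; }
    static void maj_update(int taken) { h2 = h1; h1 = h0; h0 = taken; }

    int main(void) {
        int pattern[] = { 1, 1, 0, 1, 1, 0, 1, 1, 0 };  /* T-T-N loop */
        for (int i = 0; i < 9; i++) {
            printf("actual=%d  saturating=%d  majority=%d\n",
                   pattern[i], sat_predict(), maj_predict());
            sat_update(pattern[i]);
            maj_update(pattern[i]);
        }
        return 0;
    }

On this particular T-T-N pattern both settle into one miss per iteration, but they diverge on other sequences (e.g. a long taken run followed by alternation), which is the point above: one is not a quotient of the other.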

6
irishsultan 20 hours ago 1 reply      
I seem to be missing something. When the two-bit scheme is introduced, it's said to be the same as the one-bit scheme except for storing two bits (seems logical), but then the index into the lookup table seems to involve both the branch address (already the case in the one-bit scheme) and the branch history (as far as I can see, never introduced at that point).
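
As I read the article, the history register only comes in with the later two-level schemes; the two-bit table itself is indexed by branch address alone. For anyone with the same confusion, here is a sketch of how a two-level/global predictor mixes the two (this XOR variant is McFarling's gshare; the article's "two-level adaptive, global" scheme concatenates the bits instead, but the idea is the same):

    #include <stdint.h>

    #define HIST_BITS  4
    #define TABLE_SIZE (1u << 10)

    static uint8_t  counters[TABLE_SIZE];  /* two-bit saturating counters */
    static uint32_t ghr;                   /* global history register     */

    static uint32_t index_for(uint32_t branch_addr) {
        uint32_t hist = ghr & ((1u << HIST_BITS) - 1);
        return (branch_addr ^ hist) % TABLE_SIZE;   /* address x history */
    }

    static int predict(uint32_t branch_addr) {
        return counters[index_for(branch_addr)] >= 2;
    }

    static void update(uint32_t branch_addr, int taken) {
        uint8_t *c = &counters[index_for(branch_addr)];
        if (taken && *c < 3) (*c)++;
        else if (!taken && *c > 0) (*c)--;
        ghr = (ghr << 1) | (uint32_t)taken;  /* shift outcome into history */
    }
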
7
legulere 19 hours ago 4 replies      
I really have problems reading this website. You don't have to make a website bloated to make it readable: http://bettermotherfuckingwebsite.com
8
filereaper 22 hours ago 2 replies      
Ryzen has rolled out a neural-net-based branch predictor; I'd be curious to see its accuracy compared to the listed approaches.
9
lordnacho 21 hours ago 2 replies      
Top quality article. Now we need one with the specifics of how to write code that's aware of this, for instance when to use which compiler hints. Anyone have links or books?
10
seedragons 10 hours ago 2 replies      
Is this correct? "Without branch prediction, we then expect the average instruction to take branch_pct * 1 + non_branch_pct * 20 = 0.8 * 1 + 0.2 * 20 = 0.8 + 4 = 4.8 cycles"

other than branch_pct and non_branch_pct being reversed, this seems to be assuming that 100% of branches are guessed incorrectly. Shouldn't something like 50% be used, to assume a random guess? i.e. 0.8 * 1 + 0.2 * (0.5 * 20 + 0.5 * 1) = 2.9

11
zaptheimpaler 22 hours ago 0 replies      
I love your posts dan. High quality writing, no fluff and bullshit every time :)
12
deepnotderp 12 hours ago 0 replies      
TAGE and perceptron combined are the SOTA right now, right?
13
unkown-unknowns 18 hours ago 0 replies      
Figures 12 and 14 are the same but I think the figure used is only supposed to be like that for figure 14, not for figure 12.

The "two-bit" scheme that fig 12 is for does not have branch history, whereas "two-level adaptive, global" which has fig 14 fits the bill.

14
agumonkey 18 hours ago 0 replies      
Beautiful article. The kind that makes you want to dig deeper in the whole field.
22
HackerNews Grid hackernewsgrid.com
332 points by kubopaper  23 hours ago   111 comments top 41
1
fairpx 16 hours ago 11 replies      
Interesting experiment. My observation: with the thumbnails, the title of each post becomes a less-important caption. In the case of HN, I think the text-only approach is far better. Product Hunt used to be text-only, and frankly, it was a better experience. The moment you introduce images to these types of communities, people will start using that real estate to create flashy, attention-grabbing visuals. Over time, it'll be more about how good a thumbnail looks than about the curiosity of a title that lures you into the content.
2
helloworld 22 hours ago 3 replies      
I'm hoping that this makes HN's front page just to see the fun recursion of the site displaying a screenshot of itself. (And I do appreciate the experiment in user experience design, too.)
3
nemoniac 18 hours ago 1 reply      
A huge part of the appeal of the standard HN page for me is the simple, straightforward, sensible headline without the distraction of images. The title guidelines and the insistence on adhering to them are a big plus in this regard.
4
AriaMinaei 17 hours ago 7 replies      
Whenever I see something like this, I sigh and wonder, "Why should it be so hard for the average internet user to create a live 'grid of thumbnails' for 'a list of links to webpages'? Why should it take a whole developer to code and deploy an entire website, just for this one use-case?"

Software today is not as "soft" as one would've hoped, fifty years ago. It's not malleable. It's not composable. It's barely reactive.

This is not how it was meant to be.

5
Jonas_ba 19 hours ago 1 reply      
We have done something similar for the HN search at Algolia, but flagged it under Style -> Experimental in the settings panel. It's not a grid layout, but more of a refresh of the current design. https://hn.algolia.com
6
grey-area 14 hours ago 2 replies      
You really, really need to update the screenshot of hackernewsgrid.com for infinite recursion: now that you're on the HN home page, the screenshot should include a picture of itself.
7
jacquesm 12 hours ago 1 reply      
Nice example of the Droste Effect.

https://en.wikipedia.org/wiki/Droste_effect

8
sidcool 14 hours ago 0 replies      
The internal HN links are going with a double slash.

E.g. https://news.ycombinator.com//item?id=15080693

Resulting in Unknown page.

9
hiisukun 18 hours ago 1 reply      
Thanks for posting this - after trying it, I quite enjoy browsing Hacker News with thumbnails on mobile. On my laptop, I think I prefer the original homepage. I like to check the news once a day, but some days I'm short on time and use an alternative that cuts down the results shown [1].

Overall I'm very happy to have now three good options for checking out what I consider to be a very good source of fuss free tech news and discourse.

[1] hckrnews.com - hopefully it isn't a faux pas to mention a potential competitor to your site in this thread.

10
captainmuon 16 hours ago 0 replies      
That's nice. I thought about using thumbnails of linked sites before, but I wonder about the legal dimension.

What if someone puts something illegal or copyrighted on one of the linked pages? Does anybody have advice (internationally / US / Germany)?

I'm based in Germany, and here there is strong legal protection for "quoting" excerpts. However, it is often debatable what counts as a quote. German news sites often take a photograph of a screen, instead of a screenshot. There seems to be protection for search engines (e.g. Google Photo Search), but the situation is not clear. There is also no "fair use" or safe harbor like in the US.

I'm especially afraid of cease and desist letters (Abmahnungen) - there is an entire industry of people who crawl the web and find copyrighted images with image recognition. The mean thing is that they don't let you use their tool to check for compliance - I would gladly buy a license for images I accidentially use, or remove them - but it is more profitable for them to send you a letter.

(Rant: I once had a case where someone accidentally printed a copyrighted image on a document and put a scaled-down picture of that document on a domain managed by me. The copyrighted image was about 50x50 pixels, mirrored, and black and white, but they had me pay ~800 Euros for it. Funny thing is that they never ever contacted us via the contact email. They didn't care about their client's rights or about selling an image; it was more profitable to milk us. They sent physical letters to people they thought were related to the site, until they grabbed me (the Admin-C of the domain).

Now I hear they are going after people retweeting or liking copyrighted images - IMHO that is ridiculous; there should be a difference between "including" an image and "linking to" it.)

11
thiht 16 hours ago 0 replies      
I don't really see the point since most thumbnails are just a screenshot of illegible text. It doesn't help at all.

I think a better thumbnail system would be to use an actual image of the article (for example, the thumbnail for the page https://www.gobankingrates.com/retirement/1-3-americans-0-sa... would be a crop of https://cdn.gobankingrates.com/wp-content/uploads/2016/03/sh...), or simply a favicon in case there's no image available. Hell, you're experimenting so why not even a carousel of all the images in the article? (moving on mouse hover ideally)

Also the title should not be secondary, below the thumbnail. Maybe it should be over the image in some way?
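
The og:image idea above is easy to prototype; here's a rough C sketch with libcurl (the naive strstr scan is just for illustration: real pages need an HTML parser, and many put the content attribute before the property attribute):

    #include <curl/curl.h>
    #include <stdio.h>
    #include <string.h>

    static char   page[1 << 20];  /* sketch: fixed buffer; real code should grow it */
    static size_t page_len;

    static size_t on_data(char *ptr, size_t size, size_t nmemb, void *userdata) {
        size_t n = size * nmemb;
        (void)userdata;
        if (page_len + n < sizeof page - 1) {
            memcpy(page + page_len, ptr, n);
            page_len += n;
        }
        return n;
    }

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        CURL *c = curl_easy_init();
        curl_easy_setopt(c, CURLOPT_URL, argv[1]);
        curl_easy_setopt(c, CURLOPT_FOLLOWLOCATION, 1L);
        curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, on_data);
        curl_easy_perform(c);
        curl_easy_cleanup(c);
        page[page_len] = '\0';

        /* naive scan for <meta property="og:image" content="..."> */
        char *meta    = strstr(page, "property=\"og:image\"");
        char *content = meta ? strstr(meta, "content=\"") : NULL;
        if (content) {
            content += strlen("content=\"");
            char *end = strchr(content, '"');
            if (end) printf("%.*s\n", (int)(end - content), content);
        }
        return 0;
    }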

12
Bobbleoxs 19 hours ago 0 replies      
I definitely clicked a couple more just by looking at the screenshots than by reading the plain-text index. I wonder if there's a psychological lure in graphics. Thank you!
13
aaronhoffman 17 hours ago 1 reply      
I have a "preview" feature on https://www.sizzleanalytics.com/HackerNews that uses OG tags, but I'd much rather use these images. Any way we can work something out?
14
Waterluvian 15 hours ago 0 replies      
If I were to add another feature to HN I would add color coding of how content dense a link is. Sometimes a link is a big long essay, and I might not want to click on it just yet.
15
owens99 20 hours ago 1 reply      
What API did you use for the screenshots?
16
max23_ 17 hours ago 1 reply      
Just noticed some thumbnails show a pop-up dialog instead of the site itself.

If Puppeteer[1] is used to screenshot the site, you probably need to use the page.click API to close it.

But one problem with that is that you need to know the exact selector name, which may not be a generic one.

[1] https://github.com/GoogleChrome/puppeteer

17
znpy 20 hours ago 2 replies      
This is awesome, but I would really appreciate it if it used all of my screen real estate (I am using a 1920x1200 screen) instead of only three columns.
18
97-109-107 17 hours ago 1 reply      
19
Vilkku 14 hours ago 0 replies      
Nice. There's a bug: self posts have an extra slash in the URL (for example "Show HN: How to discuss with opinionated people using the Socratic method [video]", which is currently on the front page for me).
20
have_faith 18 hours ago 1 reply      
Personally, I don't find the thumbnails add anything and also detract a little from reading the headlines. Not to knock on it as an experiment.

My main UX issue with HN is the comment nesting. Would much prefer less nesting and something akin to 4chan's backlink post referencing.

21
Pavan_ 16 hours ago 0 replies      
There is one more similar site, http://richhn.herokuapp.com/ , which fetches the meta tags of Hacker News links and shows a rich preview of each.
22
myth_buster 12 hours ago 0 replies      
Opportunity for infinite recursion.

http://i.imgur.com/9PGlGOG.jpg

23
hiven 18 hours ago 0 replies      
When I clicked on a link it added an additional slash to the URL, i.e. https://news.ycombinator.com//item?id=15074526
24
calcifer 8 hours ago 0 replies      
Wow, looks and works perfectly without JS enabled! Really appreciated.
25
djKianoosh 16 hours ago 0 replies      
Usability question.

Is it easier to read, or just otherwise better, if the picture came after the link/heading?

I find it hard to visually read/parse the way they have it now with picture first then title.

26
lowkeyokay 9 hours ago 0 replies      
For the love of God, please upvote this (the OP) so that it will be in the top 9 posts. Then we can all watch a feedback loop in all its pixel glory!

Edited to clarify that I'm not in any way asking anyone to upvote my comment - just the original post.

27
technofide 20 hours ago 0 replies      
Would love to know what you are using for the screenshots. Is it urlbox?
28
funvill 11 hours ago 0 replies      
check out https://hckrnews.com/ the "top 10", "top 20", "top 50%" links are great features for me.
29
Meekro 19 hours ago 1 reply      
Several people here are asking how you automatically screenshot websites. Look up PhantomJS -- you don't need to use someone else's API when you can make your own! =)
30
mrleinad 14 hours ago 0 replies      
Awesome, just found my new go-to HN site.
31
ankit84 21 hours ago 0 replies      
Is this a side effect of Puppeteer lib?

https://github.com/GoogleChrome/puppeteer

32
thinbeige 11 hours ago 0 replies      
This is the wrong kind of presentation for the typical HN content. And somebody did the same thing already before.
33
williamle8300 13 hours ago 0 replies      
How'd you get the thumbnails?
34
iorekz 10 hours ago 0 replies      
A bit higher and we can have a HackerNews Grid inception.
35
filipmares 12 hours ago 0 replies      
That site loaded so fast!
36
justbaker 12 hours ago 0 replies      
Nice! I like it :)
37
mindhash 14 hours ago 0 replies      
Try pinterest style
38
senectus1 19 hours ago 1 reply      
looks great, but needs meta info... like post time/comment numbers/points etc...
39
popol12 15 hours ago 0 replies      
I frankly prefer the original, it's more efficient to me.
40
allenleein 19 hours ago 0 replies      
Fantastic one!
41
sunilkumarc 16 hours ago 0 replies      
How are you getting the thumbnail images?
23
Rethinking the D-Bus Message Bus dvdhrm.github.io
227 points by kragniz  14 hours ago   121 comments top 11
1
Animats 10 hours ago 7 replies      
> We rather consider a bus a set of distinct peers with no global state.

If they've gone that far, they may as well implement QNX messaging, which is known to work well. QNX has an entire POSIX implementation based on QNX's messaging system, so it's known to work. Plus it does hard real time.

The basic primitives work like a subroutine call. There's MsgSend (send and wait for reply), MsgReceive (wait for a request), and MsgReply (reply to a request). There's also MsgSendPulse (send a message, no reply, no wait) but it's seldom used. Messages are just arrays of bytes; the messaging system has no interest in content. Receivers can tell the process ID of the sender, so they can do security checks. All I/O is done through this mechanism; when you call "write()", the library does a MsgSend.

Services can give their endpoint a pathname, so callers can find them.

The call/reply approach makes the hard cases work right. If the receiver isn't there or has exited, the sender gets an error return. There's a timeout mechanism for sending; in QNX, anything that blocks can have a timeout. If a sender exits while waiting for a reply, that doesn't hurt the receiver. So the "cancellation" problem is solved. If you want to do something else in a process while waiting for a reply, you can use more threads in the sender. On the receive side, you can have multiple threads taking requests via MsgReceive, handling the requests, and replying via MsgReply, so the system scales.

CPU scheduling is integrated with messaging. On a MsgSend, CPU control is usually transferred from sender to receiver immediately, without a pass through the scheduler. The sending thread blocks and the receiving thread unblocks.

With unidirectional messaging (Mach, etc.) and async systems, it's usually necessary to build some protocol on top of messaging to handle errors. It's easy to get stall situations. ("He didn't call back! He said he'd call back! He promised he'd call back!") There's also a scheduling problem - A sends to B but doesn't block, B unblocks, A waits on a pipe/queue for B and blocks, B sends to A and doesn't block, A unblocks. This usually results in several trips through the scheduler and bad scheduling behavior when there's heavy traffic.

There's years (decades, even) of success behind QNX messaging, yet people keep re-inventing the wheel and coming up with inferior designs.
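
The call/reply pattern described above, as a sketch; I'm writing the QNX Neutrino calls from memory of the docs, so treat names and signatures as approximate rather than authoritative:

    #include <sys/neutrino.h>

    /* Server: create a channel and loop on receive / work / reply. */
    void server(void) {
        int chid = ChannelCreate(0);
        char buf[64];
        for (;;) {
            /* Blocks until some client's MsgSend arrives. */
            int rcvid = MsgReceive(chid, buf, sizeof buf, NULL);
            /* ... check sender identity, do the work ... */
            MsgReply(rcvid, 0, "ok", 3);   /* unblocks that client */
        }
    }

    /* Client: send and block until the server replies (or fails). */
    void client(int coid) {   /* coid obtained via ConnectAttach() */
        char reply[8];
        if (MsgSend(coid, "req", 4, reply, sizeof reply) == -1) {
            /* server gone, kernel timeout fired, etc. -- the error
               surfaces right here instead of as a silent stall */
        }
    }

The send-blocks-until-reply semantics are what make the error cases show up synchronously, which is the point about avoiding stall situations.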

2
onli 14 hours ago 2 replies      
That sounds reasonable. I'm very surprised. Disabling remote targets, ignoring SELinux, focusing on reliability.

D-Bus is the one part of the modern Linux desktop I would like to / have to install to get the applications I want running, even though I dislike it a lot (PulseAudio and systemd one can simply not install). One example is the password-remembering function of Steam. Having a more reasonable implementation could help with this a lot.

3
atemerev 13 hours ago 5 replies      
D-Bus is bloated hell. Whoever came up with the idea "let's cram all communications from all sources into a single unified data stream, and let the clients fish what they need out of it" had a strange mapping of mental processes, to say the least. Most other forms of IPC are better (more scalable, more elegant, more comprehensible): "everything is a file" is better, the actor model is better, and I nearly think that even plain shared memory is better than a common bus.

There is a reason there is no "shared bus" in Internet communications.

4
arca_vorago 10 hours ago 3 replies      
"Linux Only"

I like this approach more and more these days. For example, I run Murmur (Mumble) servers sometimes, and they dropped D-Bus support in favor of ZeroC Ice (GPLv2 or proprietary), but it seems almost as bloated, if not more so. The reasoning was mostly around the portability bindings...

Recently though, I have been refusing to support Windows and OSX as a concious decision. One thing I've found is that the constant want/need to target every platform adds an ever-increasing amount of complexity, which really seems to go against the unix philosophy. So I applaud others willing to buck the trend and narrow scope down.

In the end, I think the main problem with the many-eyes theory is that code has gotten so complex that there simply aren't enough eyes, and therefore I think the future of software is going to be in the reduction of complexity. For example, loc isn't the best measure, but the Minix 3 kernel is at ~20 kloc, while the Linux kernel is now at, what, ~11 Mloc!? Not even Red Hat can audit that shit properly. (Another reason we need a Hurd microkernel, but I digress.)

5
zokier 13 hours ago 2 replies      
Just noticed that this lives under the bus1 GitHub organization; does that imply that it will eventually be using bus1?

Btw, what's happening at bus1? Haven't heard about it lately.

6
chme 13 hours ago 1 reply      
So dbus-broker is the latest project from the kdbus/bus1 guys?

Since, from the text, dbus-broker does not use the bus1 kernel module, does that mean the bus1 project is dead?

7
throw7 13 hours ago 1 reply      
Are you still required to reboot the system if you upgrade "dbus-broker"?
8
baybal2 14 hours ago 2 replies      
As I remember from more than a decade ago, the selling point of D-Bus was that they were not trying to design a high-performance message bus with sophisticated mechanisms in the spirit of CORBA and Bonobo, but a small, flexible, and utilitarian one.

Things like implicit message buffering were deliberate design decisions.

9
revelation 13 hours ago 2 replies      
Ahh yes, we know this all too well, the Linux desktop trap:

iterative work is lame, the old solution is so bad it's not even wrong, here is my idea for a rewrite, look it's even still compatible (for another few minutes).

10
j_s 14 hours ago 1 reply      
I'm sure systemd would be happy to take over responsibility for this functionality. (Sorry, couldn't resist!)
11
digi_owl 4 hours ago 0 replies      
And I see the PR team is already out in force to sell this and defend what has already been sold.

We should really just move to BSD already and let them sink this ship.

24
Studying how Firefox can collect additional data in a privacy-preserving way groups.google.com
272 points by GrayShade  1 day ago   424 comments top 6
1
kannanvijayan 1 day ago 20 replies      
I can do a quick summary of what's being proposed and why. I work in the JS team at Mozilla and deal directly with the problems caused by insufficient data. Please note that I'm speaking for myself here, and not on behalf of Mozilla as a whole.

Tracking down regressions, crashes, and perf issues is hard without good telemetry about how often they happen and in what context. Issues that might otherwise have taken a few days to resolve with good info become multi-week efforts at reproducing the issue, with little information to go on.

It simply boils down to the fact that we can't build a better browser without good information on how it's behaving in the wild.

That's the pain point anyway. Mozilla's general mission, however, makes it very difficult to collect detailed data - user privacy is paramount. So we have two major issues that conflict: the need to get better information about how the product is serving users, and the need for users to be secure in their browsing habits.

We also know from history that benevolent intent is not that significant. Organizations change, and intents change, and data that's collected now with good intent can be used with bad intent in the future. So we need to be careful about whatever compromise we choose, to ensure that a change of intent in the future doesn't compromise our original guarantees to the user.

This is a proposed compromise that is being floated. Don't collect URLs, but only top-level+1 domains (e.g. images.google.com), and associate information with that. That lets us know broadly what sites we are seeing problems on, hopefully without compromising the user's privacy too much. Also, the information associated with the site is performance data: the time spent in the longest garbage collection, paint janks.

This is a difficult compromise to make, which is why I assume it took so long for Mozilla to come around to proposing this. These public outreaches are almost always the last stage of a lengthy internal discussion on whether proposals fit within our mission or not.

I'm not directly involved in this proposal, but I personally think it's necessary, and strikes a reasonable balance between the privacy-for-users and actionable-information-for-developers requirements.

2
Vinnl 1 day ago 6 replies      
Note: "planning" means "reaching out for feedback about".

Also interesting: the method they plan on using for anonymising this: https://en.wikipedia.org/wiki/Differential_privacy#Principle...

If that is not sufficiently anonymous, then please submit the reasoning why to Mozilla.
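
For readers who don't want to click through, the core trick behind RAPPOR-style collection is randomized response. A minimal sketch in Python, assuming a simple 50/50 coin (the function names and coin bias are illustrative choices, not Mozilla's actual parameters):

    import random

    def randomized_response(truth):
        # First coin: heads -> answer honestly.
        if random.random() < 0.5:
            return truth
        # Tails -> answer with a second, independent coin.
        return random.random() < 0.5

    def estimate_true_rate(reports):
        # Observed rate p relates to the true rate q by p = 0.5*q + 0.25,
        # so q = 2*p - 0.5: no single report is trustworthy, aggregates are.
        p = sum(reports) / len(reports)
        return 2 * p - 0.5

    reports = [randomized_response(random.random() < 0.3) for _ in range(100000)]
    print(estimate_true_rate(reports))  # ~0.3

Any individual answer is plausibly deniable, yet population-level frequencies stay measurable.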

3
frankmcsherry 1 day ago 0 replies      
As someone familiar with differential privacy, and (somewhat less) with privacy generally, here are some suggestions for Mozilla:

1. Run an opt-out SHIELD study to answer the question: "how many people can find an 'opt-out' button?". That's all. You launch this at people with as much notice as you would plan on doing for RAPPOR, and see if you get a 100% response rate. If you do not, then 100% - whatever you get are going to be collateral damage should you launch DP as opt-out, and you need to own up to saying "well !@#$ them".

2. Implement RAPPOR and then do it OPT-IN. Run three levels of telemetry: (i) default: none, (ii) opt-in: RAPPOR, (iii) opt-in: full reports. Make people want to contribute, rather than trying to yank what they (quite clearly) feel is theirs to keep. Explain how their contribution helps, and that opting-in could be a great non-financial way to contribute. If you give a shit about privacy, work the carrot rather than the stick.

3. Name some technical experts you have consulted. Like, on anything about DP. The tweet stream your intern sent out had several historical and technical errors, and it would scare the shit out of me if they were the one doing this.

4. Name the lifetime epsilon you are considering. If it is 0.1, put in plain language that failing to opt out could disadvantage anyone by 10% on any future transaction in their life.

I think the better experiment that is going on here is the trial run of "we would like to take advantage of privacy tech, but we don't know how". I think there are a lot of people who might like to help you on that (not me), and I hope you have learned about how to do it better.
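
To make point 4 concrete: under epsilon-differential privacy, any one person's data can change the probability of any outcome by at most a factor of e^epsilon. A quick back-of-envelope, taking the 0.1 from point 4 as a hypothetical value:

    import math

    # e^epsilon bounds how much more likely any outcome can become
    # once your data is included (the core DP guarantee).
    for eps in (0.1, 1.0, 4.0):
        factor = math.exp(eps)
        print("epsilon=%.1f: up to %.1f%% more likely" % (eps, (factor - 1) * 100))
    # epsilon=0.1 -> ~10.5%, which is where the "10%" figure above comes from.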

4
embik 1 day ago 3 replies      
This is ridiculous. I use and recommend Firefox for pure ideological reasons, because frankly, Chrome/Chromium is miles ahead of them.

If they start opt-out tracking using the same approach as Google, I see no reason to use it or install it for my friends and family. That's some data for you, Mozilla.

5
huhtenberg 1 day ago 3 replies      
The single largest advantage of Firefox over other browsers is that despite all odds and occasional missteps they managed to respect users' desire for complete privacy.

> For Firefox we want to better understand how people use our product to improve their experience.
Sure thing. But the fact that they are unhappy that some (many?) people are opting out of the data collection is merely a sign that they don't want to understand why people are using Firefox in the first place. By opting out of the data collection, people effectively tell them over and over again that they don't want Mozilla "to understand how they use Firefox" or "to improve their experience", not at the expense of their privacy.

No phoning home. No telemetry, no data collection. No "light" version of the same, no "privacy-respecting" what-have-you. No means No. Nada. Zilch. Try and shovel any of that down people's throats and the idea of Firefox as a user's browser will die.

6
kogepathic 1 day ago 3 replies      
> What we plan to do now is run an opt-out SHIELD study [6] to validate our implementation of RAPPOR.

IMHO, this is a bad idea. Many people I know already use Firefox because they're wary of giving Google (Chrome) all their data.

Firefox should make this feature opt-in only.

25
Alaskas thawing soils are now pouring carbon dioxide into the air washingtonpost.com
201 points by Mz  9 hours ago   171 comments top 15
1
taberiand 9 hours ago 11 replies      
I wouldn't be surprised if such feedback loops have been excluded from the models of climate change because they paint a picture so dire that no amount of mitigation (if there were any serious attempts at mitigation going on) could save our way of life.

"Sooner and worse than expected" is a phrase I expect to hear with increasing frequency.

2
vwcx 9 hours ago 0 replies      
"The study, based on aircraft measurements of carbon dioxide and methane and tower measurements from Barrow, Alaska, found that from 2012 through 2014, the state emitted the equivalent of 220 million tons of carbon dioxide gas into the atmosphere from biological sources (the figure excludes fossil fuel burning and wildfires).

Thats an amount comparable to all the emissions from the U.S. commercial sector in a single year."

3
dvdhnt 9 hours ago 3 replies      
I don't think this is specific to Alaska - there's something similar going on in Siberia and other places where melting permafrost has the potential to do serious damage.

There are even books (and soon at least one movie) on how restoring woolly mammoth populations can save us.

- http://www.npr.org/2017/07/05/534768716/woolly-breathes-new-...

- https://www.theverge.com/2017/7/27/16050308/woolly-ben-mezri...

4
cropsieboss 8 hours ago 2 replies      
Arctic ice has 1400 Gt of carbon locked up as methane. [1] This is equivalent to 1400/10 = 140 years of human 2016 activity. [2] Methane also has a stronger effect than CO2. If the ice starts to melt, we are doomed.

[1]: https://en.wikipedia.org/wiki/Arctic_methane_emissions#Contr...

[2]: https://www.co2.earth/global-co2-emissions

5
trapperkeeper74 6 hours ago 1 reply      
That's probably so; however, Siberia is contributing much more. The whole of the subarctic tundra is at risk of rapid melting and CO2 and methane release as trapped ancient organic material decays. There are also other major, imminent issues: the uncertain stability of the ESAS clathrates, zero summer sea ice (Arctic Ocean heating) and jet stream abnormalities (hence more variable day-to-day weather).

Anyone who wants actual facts ought to watch Paul Beckwith of the University of Ottawa on YT for detailed updates and analysis on climate change.

https://youtube.com/user/PaulHBeckwith

6
artur_makly 7 hours ago 0 replies      
And if you think that's bad... check out what's happening in Siberia: https://www.wired.com/2016/12/global-warming-beneath-permafr...
7
memracom 3 hours ago 1 reply      
Probably the asteroid Apophis, due to hit the Earth in 2029, will cause enough cooling to prevent most clathrate from melting. If that works we have a nearby supply of small asteroids that we can fire at the Earth to deepen the winter effect. It works best if you hit a shallow coastal shelf area with lots of limestone rock. Maybe we will sacrifice the Caribbean?
8
syncopate 8 hours ago 1 reply      
Couldn't one try to at least limit the feedback loop by spreading sulfur over the affected regions by plane? (Simulating a volcanic eruption that reflects sunlight)
9
StevePerkins 9 hours ago 1 reply      
Worth pointing out that this article is over 3 months old.
11
thriftwy 7 hours ago 2 replies      
Launch solar shade!

Sea levels drop.

12
crush-n-spread 8 hours ago 1 reply      
The atmospheric carbon situation is not good, and we (as a species) need to come up with actionable geo-engineering solutions. Here is one.

Rainwater hits mountains and dissolves silicate minerals into cations that flow into rivers and then oceans. The oceans naturally uptake carbon from the atmosphere by reacting atmospheric carbon with cations in the water that come from those dissolved silicate minerals. This uptake de-acidifies the oceans and produces food for ocean life; for us to collect all the carbon produced in the USA last year, we would need to crush about 60 km^3 of silicate rock (which is in abundance) and spread it along coastlines.
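
A back-of-envelope version of that mass balance, sketched in Python. The olivine stoichiometry, rock density, and US emissions figure are my own illustrative assumptions; the idealized result comes out far below the ~60 km^3 above because real coastal weathering is slow and incomplete:

    # Idealized enhanced-weathering mass balance (illustrative numbers).
    # Olivine (Mg2SiO4, ~140 g/mol) can bind up to 4 mol CO2 (4 * 44 g/mol).
    CO2_PER_TONNE_ROCK = (4 * 44) / 140   # ~1.26 t CO2 per t rock, ideal case
    ROCK_DENSITY = 3.3                    # tonnes per m^3, olivine-rich rock
    US_EMISSIONS = 5.1e9                  # rough annual US CO2, tonnes

    rock_tonnes = US_EMISSIONS / CO2_PER_TONNE_ROCK
    volume_km3 = rock_tonnes / ROCK_DENSITY / 1e9  # m^3 -> km^3
    print("~%.1f km^3 of rock at perfect uptake" % volume_km3)  # ~1.2 km^3

The gap between that stoichiometric lower bound and the quoted ~60 km^3 is essentially the weathering-efficiency factor.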

To successfully sequester enough carbon to save the ecosystem, this might be one of the best options we have. This paper [1] does a good job of explaining what I've touched on here.

[1] http://www.greensand.nl/content/user/1/files/rog20004.pdf

13
EGreg 8 hours ago 6 replies      
Can someone PLEASE tell me why there haven't been more efforts underway to have communities around the world plant more trees and engage in planned reforestation?

This is as close as you can get to a globally available mechanism for pulling carbon dioxide out of the air (and methane can burn leaving carbon dioxide).

I mean this very seriously. Richard Branson is looking to fund ways to pull carbon out of the atmosphere. China has developed a way to store carbon in rock. Meanwhile we have had a way all along - TREES! Those and algae in the oceans.

The "answer" I often hear is that the carbon will eventually be released when the trees burn in forest fires. Well, first of all, what matters is the overall biomass of trees. And secondly even if it didn't, that buys us many decades.

PS: How come this is being so heavily downvoted?

14
pfarnsworth 5 hours ago 1 reply      
What sort of effect does this increased CO2 have on the plant life in the area? Do they benefit from the increased CO2 leading to more plants/trees, etc?
15
cryoshon 7 hours ago 3 replies      
and we can't even get people to agree that there's a problem.

and we have no leadership to guide our response.

and we can't winnow our way out of this alone.

what now?

26
A ship that flips 90 degrees for precise scientific measurements atlasobscura.com
241 points by sohkamyung  19 hours ago   74 comments top 13
1
J-dawg 15 hours ago 2 replies      
I thought the photo of the two basins at 90 degrees to each other was a nice example that it sometimes makes sense to violate the DRY principle.

Someone must have looked at installing a gimballed basin, realised how complicated the custom-made gimbal and plumbing would have to be, and thought "screw it, let's just order a second basin".

Also, someone really needs to put a lid on that soap container!

2
ChuckMcM 9 hours ago 1 reply      
I have always enjoyed articles on this research platform. I imagined at one time that it would make for an interesting tourist site if you could put an observation "dome" at what would become the bottom and viewing galleries along the length. Then tow your tour out somewhere, flip, and let people move up and down looking at various levels of sea life.

At the time, an older engineer pointed out that if you didn't fill the bottom with sea water it didn't flip. So really the tourists would all have to wear scuba gear anyway :-). Which crushed my young dreams of a Captain Nemo style encounter in the tidewaters of the great barrier reef.

3
gerbilly 15 hours ago 4 replies      
> As research budgets shrink, and fewer people go into oceanographic research, it's hard to say what is on the horizon for FLIP.

Why is it that budgets everywhere, from public and private businesses to government to scientific research, are always shrinking?

It's like it's the invariant of our age: well you know, budgets are shrinking...

4
kartan 15 hours ago 0 replies      
The name of the ship is FLIP: FLoating Instrument Platform. Nice acronym work there.
5
amingilani 14 hours ago 4 replies      
The article didn't address it so I'll just ask: why?

Why do you need to build a flipping (no pun intended) ship?

Couldn't they have just as easily built a tall floating rig that could be towed by the same tugboat? Why do they even need to change to boat form? Can't they tow it when it's vertical?

Drilling platforms are stable floating structures, aren't they? They consist of a stable floating upper platform.

6
jacquesm 15 hours ago 1 reply      
The stresses on that hull when it is at the 45 degree point during a flip must be immense.
7
ge96 16 hours ago 2 replies      
Hmm

Makes me think of how they designed a hovering rocket crane to lower a rover autonomously on another planet. There was a reason and it worked, and with this ship, hmm, is that really the only solution?

You could have floating buoys with either pneumatic or electromagnetic stabilized suspension... But then again, the scale/weight of this thing... I guess the design makes sense as far as being able to operate in shallow waters and then go out and do its thing.

8
mLuby 7 hours ago 0 replies      
FLIP is the most Kerbal ship we have right now.
9
PanMan 11 hours ago 0 replies      
Strange that they don't flip the small boat that's hanging off the side: outboard engines (especially if they are 4-stroke) don't like not being upright. Seems it would be fairly easy to hang it so it could swing?
10
amelius 16 hours ago 2 replies      
Why can't they just use a probe of some kind? Or a small submarine released from the ship?
11
sandworm101 14 hours ago 1 reply      
Similar approach, but private sector and far bigger. 911 operators get calls whenever these ships operate in view of the public.

http://www.telegraph.co.uk/men/the-filter/virals/10583550/Ho...

12
GrumpyNl 15 hours ago 0 replies      
Every once in a while you see footage of technology that really amazes you; this is one of those times.
13
JDT 15 hours ago 1 reply      
I would have designed the crew quarters on a kind of gimbal so that they are always upright. That would make better use of the space, with no need to put everything away (thinking of the soap dispenser on the basin) when changing orientation.
27
Going Multi-Cloud with AWS and GCP: Lessons Learned at Scale metamarkets.com
228 points by jbyers  2 days ago   53 comments top 13
1
nodesocket 2 days ago 4 replies      
One of the biggest benefits of Google Cloud is networking. By default, GCE instances in VPCs can communicate with all instances across zones and regions. This is a huge plus.

On AWS, multi-region involves setting up VPN and NAT instances. Not rocket science, but wasted brain cycles.

Generally, with GCP, setting up clusters that span three regions should provide ample high availability, and most users don't need to deal with multi-cloud headaches. KISS. You can even get pretty good latency between regions if you set up North Carolina, South Carolina, and Iowa. Soon West Coast clusters will be possible between Oregon and Los Angeles (region coming soon).

2
ad_hominem 1 day ago 2 replies      
If any Google Cloud people are listening I wish you had an equivalent to AWS's Certificate Manager. Provisioning a TLS certificate which automatically renews for eternity (no out-of-band Let's Encrypt renewal process needed) and attaching it to a load balancer is so nice compared to Google Cloud's manual SslCertificate resource creation flow[1].

To a lesser extent, it's also nice registering domains within AWS and setting them to auto renew. Since Google Domains already exists, it would be neat to have this feature right inside Google Cloud.

[1]: https://cloud.google.com/compute/docs/load-balancing/http/ss...
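
For comparison, the ACM flow really is just a couple of API calls. A rough boto3 sketch (the domain and listener ARN are placeholders, and DNS validation still needs the CNAME record created once):

    import boto3

    # Request a DNS-validated cert; ACM renews it automatically
    # as long as the validation CNAME stays in place.
    acm = boto3.client("acm", region_name="us-east-1")
    cert_arn = acm.request_certificate(
        DomainName="www.example.com",  # placeholder
        ValidationMethod="DNS",
    )["CertificateArn"]

    # Attach it to an existing ALB HTTPS listener.
    elbv2 = boto3.client("elbv2", region_name="us-east-1")
    elbv2.modify_listener(
        ListenerArn="arn:aws:elasticloadbalancing:...",  # placeholder
        Certificates=[{"CertificateArn": cert_arn}],
    )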

3
vira28 2 days ago 2 replies      
One thing that I liked with GCP is their cost-saving recommendations. I spun up a compute engine for a hobby project and within minutes they gave recommendations to reduce the instance size and showed how much I could save. I don't think AWS offers something like that. Correct me if I am wrong.
4
manigandham 1 day ago 0 replies      
When it comes to GCP:

- They have Role-Based Support plans that offer flat prices per subscribed user, which is a much better model. [1]

- Live migration for VMs means host maintenance and failures are a minor issue, even if all your apps are running on the same machine. It's pretty much magical and, when combined with persistent disks, effectively gives you a very reliable "machine" in the cloud. [2]

1. https://cloud.google.com/support/role-based/

2. https://cloud.google.com/compute/docs/instances/live-migrati...

5
azurezyq 2 days ago 0 replies      
One extra point for tracking VM bills:

GCE bills are aggregated across instances. To get a more detailed breakdown, you can apply labels to the instances, and the bills will have label information attached in BQ.

Alternatively, you can leverage GCE usage exports here:

https://cloud.google.com/compute/docs/usage-export

Which has per-instance per-day per-item usage data for GCE.
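
As a rough illustration of what the label breakdown enables once the export lands in BQ (the table name and label key below are placeholders, and the export schema may differ in detail):

    from google.cloud import bigquery

    # Sum exported costs per value of a hypothetical "team" label.
    client = bigquery.Client()
    query = """
        SELECT l.value AS team, SUM(cost) AS total_cost
        FROM `my-project.billing.gcp_billing_export`, UNNEST(labels) AS l
        WHERE l.key = 'team'
        GROUP BY team
        ORDER BY total_cost DESC
    """
    for row in client.query(query).result():
        print(row.team, row.total_cost)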

Disclosure: I work for Google Cloud but not on GCE.

6
user5994461 2 days ago 2 replies      
>>> on AWS you have the option of getting dedicated machines which you can use to guarantee no two machines of yours run on the same underlying motherboard, or you can just use the largest instance type of its class (ex: r3.8xlarge) to probably have a whole motherboard to yourself.

Not at all. Major mistake here.

When you buy dedicated instances on AWS, you reserve an entire server for yourself. All the VMs you buy subsequently will go to that same physical machine.

In effect, your VMs are on the same motherboard and will all die together if the hardware experiences a failure. It's the exact opposite of what you wanted to do!

7
dswalter 2 days ago 2 replies      
If AWS were to go to a per-minute billing cycle, they would be instantly more price-competitive with Google's offering. Or, to put it the other way around, those leftover minutes form a significant chunk of AWS's profit margin.
8
matt_wulfeck 2 days ago 0 replies      
> As we investigated growth strategies outside of a single AZ, we realized a lot of the infrastructure changes we needed to make to accommodate multiple availability zones were the same changes we would need to make to accommodate multiple clouds.

Maybe the author means multiple regions? Multi-AZ is so easy. Everything works. Multi-region is much harder.

9
whatsmyhandle 2 days ago 2 replies      
Very nice writeup! A nice, detailed read that was easy to understand.

It seems to focus more on raw infrastructure (EC2 vs GCE) than on each company's PaaS offerings. Obviously AWS has the front-runner lead here, but I would be super curious about a comparison of RDS vs. Cloud Spanner, for instance. (Pun unintentional, but then realized, and left in there.)

10
swozey 2 days ago 0 replies      
Great thorough comparison and falls very into line with my experience. Definitely worth the read. Thanks!
11
throwaway0071 2 days ago 0 replies      
Off topic: it's frustrating that these companies spend quite a lot of time and money learning about the complexities of their infrastructure, but when you're interviewing at such companies, you're expected to have answers for everything and a complete strategy for the cloud.

/rant

12
hobolord 1 day ago 0 replies      
Great post! How difficult is it to switch from an AWS EC2 instance to the GCP version?
13
mrg3_2013 2 days ago 0 replies      
Nice post! I will be using it as a reference.
28
All 50 startups from Y Combinators Summer 2017 Demo Day 1 techcrunch.com
231 points by nsparrow  2 days ago   196 comments top 32
1
rudimental 1 day ago 4 replies      
Flock seems like it could help solve some crimes, but at a huge cost - privacy. You can't opt out, unless you live nearby and register with the company. "Residents of monitored neighbourhoods can opt-out of being tracked - but visitors, or people passing through, cannot."

http://www.bbc.com/news/technology-41008141

Do people like law enforcement collecting license plates and keeping them in databases indefinitely? How about random people? (Flock says it deletes data after 30 days, but its users can keep it beyond 30 days.) What if their users had a database with pictures of faces, not just license plates? Facial recognition seems to be in their roadmap.

For context, it's presumably legal in California to collect license plates, and pictures of people's faces, if they're in public spaces.

A quote from the founder on privacy: "We don't want to get into the business of making decisions about privacy and how this technology is used beyond the original use case."

Aren't there better ways to solve the problem(s) this product solves?

2
robinjfisher 1 day ago 4 replies      
As I'm in the industry, my thoughts on some of the recruitment-related companies:

10 BY 10 - I don't get it. Their website says they are a contingency search firm with a value prop of having resumes filtered by people with domain knowledge before submission to clients. Also that they deliberately hit a low acceptance rate on resumes because they try to submit a diverse field of candidates. If the goal is to get the hiring manager the right person, then the best resumes should be submitted without imposing some arbitrary diversity requirement.

70 Million Jobs - love the concept. Big believer that proper recruitment practices can help rehabilitation of offenders.

ShiftDoc - marketplace concept in the healthcare space. Not sure of the regulatory environment in the US, but in the UK these models will suffer due to regulatory issues over the status of the workers.

Gustav - I like this model. Offers a platform for smaller agencies to compete with the larger players. It will be interesting to see how sustainable it is given the commoditisation in certain sectors, as the 3% take seems very high with the pressure on margins in staffing in the US.

3
indescions_2017 1 day ago 3 replies      
On GameLynx:

Gamedev studio targeting mobile device platforms with an emphasis on hardcore, competitive league style play. Backed by "one of the largest game companies in the world."

Honor of Kings, Clash Royale, Vainglory, Hearthstone and Supercell's upcoming Brawl Stars create a crowded marketplace. Interested to hear what will differentiate GameLynx's game? How is it planning to innovate and compete? What is the strategy for capturing market share in China, India and the rest of Asia? What exactly is the "next gen" in mobile eSports?

The revenues that these games generate are astronomical. And at some of the highest margins possible. Supercell hit the $2B per year mark recently. And Tencent's profit last quarter was close to $3B.

Best of luck to the team and can't wait to play!

4
icebraining 1 day ago 4 replies      
Totemic is interesting to me, since one of my close relatives spent five hours on the floor of her house after falling, as she could never remember to carry her cellphone (a hard habit to acquire at 90 years old...). And of course, she could have lost consciousness, which would render the cellphone useless.

That said, I wonder how they can have a single device for the whole house, and how it works on harder floors, like ceramic tile, which doesn't make much of a sound. Seems to me that something so sensitive would be triggered by way too many false positives.

5
t0mbstone 1 day ago 3 replies      
I find it interesting that "Honeydue" is in the list, when multiple free apps such as Buxfer and Splitwise have already been available for years.

Both Buxfer and Splitwise serve the exact same functions as Honeydue (splitting bills/expenses with roommates and/or partners), and both of them are free to use. Sadly, neither app has found a way to be all that profitable after all these years.

Maybe Honeydue will do something to solve the marketing problem that apparently both Buxfer and Splitwise have?

6
eps 1 day ago 5 replies      
Gopher is a poorly chosen name for a platform - https://en.m.wikipedia.org/wiki/Gopher_(protocol)

Dropleaf (Netflix for indie games) - at $10/mo for a 50-game bundle, I can't see how this can attract any quality games and devs.

7
jelliclesfarm 1 day ago 4 replies      
Modular Science - Outdoor farm robot. They pivoted from lab automation to outdoor farm bots? How?

I applied to YC twice (unsuccessfully) with the same idea. And I am a farmer. Just not a PhD from MIT. I would love to know how they came up with that number. Really. Would love love love to know how..

8
michaelmwangi 1 day ago 1 reply      
I found Airthium [http://www.airthium.com/] to be quite interesting. I wonder how the efficiency compares with the liquid metal battery at Ambri: http://www.ambri.com
9
lpolovets 1 day ago 1 reply      
I'm surprised this includes non-public stats like MRR metrics. Is that normal for a TechCrunch demo day summary?
10
icebraining 1 day ago 4 replies      
They say Pyka already built a plane, so why do they only show renderings? Seems weird to waste that credibility advantage.
11
yladiz 12 hours ago 0 replies      
How did Guggy get into YC? It's going to become another Yik Yak, where it has a lot of users and growth but no strategy for monetization (users aren't gonna accept ads in their text messages, for example).
12
crummy 1 day ago 1 reply      
The second photo in the article, under Zendar, is actually a screenshot from a video game called Scanner Sombre. I guess this is just a mistake?
13
jnordt 1 day ago 0 replies      
On the startups relating to ground based mobility / autonomous vehicles:

Zendar

Interesting concept! I recently talked to an engineer from Hella, and it seems most of the big automotive suppliers are developing some kind of low-cost radar units that can then be combined to generate rough point clouds.

Definitely willing to test the tech, if you have some spare units @ Zendar ;-)

MayMobility

If I get the concept right, isn't the business model very similar to Door2Door, CleverShuttle (even though they are not yet using shuttles) or a number of London competitors?

I personally agree with the underlying assumption of many of the players in that market: one of the easiest entries for autonomous vehicles is urban environments, with a max speed of 30-40 km/h, navigating only in pre-mapped and pre-defined areas.

14
goberoi 1 day ago 2 replies      
Did I miss it, or are there no virtual reality startups on this list?

I'm a wee bit surprised: VR is nascent and there aren't enough devices out there to build sustainable businesses shipping content, but I would have expected at least 1 or 2.

15
hn_throwaway_99 1 day ago 0 replies      
Overall, I think this is a really impressive list. It seems like a varied mix, and given criticism from a couple years back that Silicon Valley isn't focusing on "hard" stuff, it was cool to see lots of awesome tech (auto-piloting planes! robot vegetable pickers!)
16
probably_wrong 1 day ago 0 replies      
D-ID seems to me quite interesting, if only because their interests are directly opposed to those of several governments.

It's a shame they don't seem to offer a 1-click "buy now" alternative, but I imagine the process might be too time-intensive for that.

17
Lordarminius 1 day ago 0 replies      
Helium seems like a decent idea and I'm proud to see a Nigerian startup represented.

However, to my mind and from my experience, the major issue in the medical records space is how the data obtained from caregiver/patient interaction gets captured in the first instance. Health workers have not embraced typing in any form (and they most likely will not), and the traditional way of recording by hand does not digitize readily.

How are you solving this problem?

18
geewee 1 day ago 0 replies      
Sunu looks really cool - I did a project a while ago where we embedded distance sensors and vibrators in shoes to help the visually impaired navigate objects that were close to the ground (e.g. curbs, stairs) - glad to see someone else doing work in that space.
19
Gargoyle 1 day ago 0 replies      
TechCrunch's top 7 picks of this group of 50:

Pyka, PullRequest, Zendar, Gopher, Modular Science, Escher Reality, Forever Labs.

With the exception of Gopher, I think that's a solid list. I just can't get excited about another company putting apps on top of email, though. Maybe it's just me.

All the others seem to have at least a potential path to strong growth that's pretty obvious.

20
statictype 1 day ago 1 reply      
Verge Sense looks interesting. Is it just a platform for gathering data from sensors and providing insights for facilities? Or do you use their applications to actually do facility management (i.e., seat booking, etc.)?
21
pfarnsworth 1 day ago 1 reply      
I'm surprised about the stem cell company. Stem cell storage companies are notorious scams. I remember doing research into these when my first child was born.

There's only a small subset of diseases it can actually help, and until you can actually grow organs, it's generally useless, for a lot of money. Also you can generate stem cells from adult blood cells or bone marrow anyway.

What the YC company appears to offer doesn't seem much more than a Silicon Valley Blood Boy, the unproven promise of youth from young blood.

22
0bsidian 1 day ago 1 reply      
Ubiq looks interesting, but I thought this space was already quite crowded.

I wonder if they're going for SMBs / companies that are deploying video conferencing solutions for the first time.

23
esaym 1 day ago 2 replies      
Wish we had a list of what they are all using on the front and back ends.
24
nullandvoid 1 day ago 1 reply      
Anyone got more information on GameLynx? The information given there is extremely vague and the site doesn't offer any clarification either.
25
avs733 1 day ago 0 replies      
I'm amused that Peergrade's home link to the academic research behind the product 404s.
26
lesiva 1 day ago 3 replies      
Do the video recordings of these demos get posted online?
27
contingencies 1 day ago 3 replies      
Feather seems very solid. If you guys want a China partner let me know. It's Ikea's fastest growing market.
28
bluker 1 day ago 3 replies      
Relationship Hero is hilarious to me. We're going to solve the unpredictability of human relationships with a "light-weight" solution. Oh and we assume no responsibility for the advice given or the actions you take.

Luckily no one reads the ToS and it's a huge market - everyone has relationship problems.

So they'll make millions, because most of the population isn't rational enough to understand that, at a base level, none of us knows shit about all the intricacies of human relationships. Life coaches and psychiatrists included.

Kudos to them on their traction. I'm talking shit, but preying on people's insecurities in relationships is an endless market. Can't wait to see their FB ads pop up right after a Tai Lopez infomercial.

Directly quoted from their TOS:

The Platform enables you to communicate with a Dating Expert for the purpose of getting dating advice, information or any other input, benefit or service (not considered "Counselor Services"). The Dating Experts are neither our employees nor agents nor representatives. Furthermore, we assume no responsibility for any act, omission or doing of any dating expert. We make no representation or warranty whatsoever as to the willingness or ability of a Dating Expert to give advice. We make no representation or warranty whatsoever as to whether you will find the Dating Expert's advice relevant, useful, correct, relevant, satisfactory or suitable to your needs. We do not control the quality of the dating advice and we do not determine whether any Dating Expert is qualified to provide any specific service as well as whether a dating expert is categorized correctly or matched correctly to you. While we may try to do so from time to time, in our sole discretion, you acknowledge that we do not represent to verify, and do not guarantee the verification of, the skills, degrees, qualifications, licensure, certification, credentials, competence or background of any Dating Expert.

29
tanilama 1 day ago 0 replies      
Zendar and Darmiyan look interesting. Good to see startups tackle real problems.
30
elmar 1 day ago 0 replies      
Skyways.com VTOL vehicles look interesting, but no photos.
31
demonshalo 1 day ago 0 replies      
Aside from a select few, I am not very impressed with said "offering" :/ It feels as if it is all a bunch of recycled stuff. But I guess that's just the cynic in me talking!
32
kornish 1 day ago 6 replies      
Personal favorite snippet from the article:

> With this pedigree, PullRequest has managed to draw interest from 450 teams. Though only a portion of these are actually using the service, PullRequest touts a $136 million annualized revenue run rate.

From Crunchbase: founded May, 2017.

As a disclaimer, I'm no accountant, but that just seems downright deliberately misleading. Their standard plan is $49/mo.

29
Universities are broke lets cut the pointless admin and get back to teaching theguardian.com
167 points by ryan_j_naughton  8 hours ago   161 comments top 25
1
meri_dian 5 hours ago 5 replies      
I work in the administration for a top public US research university. The increase in the size of university administration and bureaucracy is due to a number of factors. One is certainly unnecessary employment and over-employment. Not only at high levels, with VPs, Assistant VPs, Assistant Vice VPs, Chancellors, Vice Chancellors, Executive VPs, Directors of XYZ, etc., but also at low levels, where the work done by 3 could realistically be done by 1.

However it's also important to recognize that not all of the runaway growth of University bureaucracy is due to poor management or redundant workers; expansion of IT infrastructure and increased regulatory requirements - especially for public institutions - demand more labor. These are the obvious culprits, but beyond these, because the modern University has become far more than just a place of higher education and has come to resemble a miniature city, it is expected to serve the diverse non-academic needs of tens of thousands of students, in addition to more traditional academic needs. Counseling and advisory services, recreational activities, food service, engagement and diversity programs, ubiquitous computing, etc. all add to the University's bottom line. Universities fear that if they were to stamp their feet and refuse to supply these amenities in the name of keeping down tuition, matriculation rates would decline as students would seek greener pastures elsewhere.

Add to this the fact that Universities receive no penalty from the market for continually increasing their prices. Because student loans are available to service ever increasing tuition costs, and students pretty much need to go to college to succeed in the 21st century, demand for college education is highly inelastic. What economic entity wouldn't raise its prices if it knew demand for its product wouldn't suffer?

In a traditional market, as one supplier increases price, competitors enter the market offering lower prices. This doesn't happen in the market for higher education because the value of a University is largely tied to its prestige, and prestige cannot be easily generated by competitors. We bemoan the high cost of University education then mock the University of Phoenix and similar offerings. Market dynamics are the guilty party here.

2
rfdub 5 hours ago 5 replies      
I work in post-secondary administration, so I think I have some perspective here. Part of the problem, at least in the US & Canada (where I live), is that post-secondary institutions are positioning themselves less and less as places to get an education and more and more as places to go for an "experience." It's no longer enough to provide a quality education; universities now are selling themselves on their facilities, their "student life" and all the other intangibles that are secondary to actual education. This leads to all the administrative bloat we're seeing: now that many schools are functioning more like glorified four-year spas, they have to have departments filled with staff to plan events, throw parties, Snapchat sports games, provide "safe spaces," etc.

I haven't been in the sector long enough to have a real handle on when or why this shift happened, but from my perspective it's the primary driver of the increasing administrative bloat. Schools are competing more on the intangibles, and so they need to invest more into these areas, which means more staff and more overhead.

Personally, I think the whole university model isn't long for this world, though, as there are plenty of ways competency can be signaled apart from a fancy foil-stamped piece of paper. Eventually, when the costs of a university education don't provide a positive return over any reasonable time horizon, students are going to start looking for alternatives en masse and the market will innovate to meet that demand.

3
tqi 3 minutes ago 0 replies      
"For instance, before introducing a new procedure they would need to eliminate an old one."

If someone wrote a tech article suggesting that companies should require removal of a line of code for every new line introduced, do you think that article would make it to the front page of HN?

4
mnm1 5 hours ago 6 replies      
Yes. This is why I refuse to donate to my alma mater anymore. Tuition has nearly tripled in fourteen years while they are still teaching the same number of students with roughly the same or fewer full-time faculty. There's something seriously wrong with that, and this is a huge symptom of it. Until they get their shit together, they need less money coming in, not more. This is supposed to be a nonprofit institution, but clearly many people are making big money in this business at the expense of students. The federal loan programs certainly don't help either. Allowing student loans to be discharged in bankruptcy would also lessen this money feast for universities. Alas, no solution is in sight, so I do my part in keeping money away from these money furnaces.
5
Animats 5 hours ago 5 replies      
Stanford is building a new "campus" in Redwood City. 35 acres. 2,700 people on site. None are students. None are faculty. No teaching or research will occur there. It's all administrators.[1] "School of Medicine administration; Stanford Libraries and University Archives; the major administrative units of Business Affairs; Land, Buildings and Real Estate; University Human Resources; Residential & Dining Enterprises; and the Office of Development", says Stanford's FAQ. ("Development" in university-speak means fund-raising, not building construction.)

Now that's management bloat.

Stanford has only 2,180 faculty members.

[1] https://redwoodcity.stanford.edu/

6
slackstation 5 hours ago 1 reply      
The pointless admin is from services given to the students. Universities (that aren't household brand names like the Ivies) compete on services and facilities. And because most students are young and using other people's money (their parents' or their future selves'), they will choose schools not because they have the best deal educationally, but because they have beautiful grounds, newer, swankier dorms and all of the social clubs and facilities for those, like sports fields, etc.

It's a market problem with misaligned incentives and payment structures that has slowly grown worse over the past 40 years. No one actually says no because competition favors those that fatten themselves up with attractive but functionally useless things.

It's more like peacock feathers than malice or greed by administrators.

7
bluetwo 5 hours ago 2 replies      
As an adjunct professor running one class per year, I ran the calculation of:

(Amount I'm paid per class / (students in class * cost per credit * credits for class) )

And found I'm paid about 10% of what the students pay for the experience of taking my class. I can't help but wonder what happens to the rest of that money.
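
Plugging hypothetical (but plausible-looking) numbers into that formula shows how a ~10% share can fall out; every figure below is made up for illustration:

    # Illustrative numbers only, not the commenter's actual figures.
    pay_per_class = 3500       # adjunct pay for one course, USD
    students = 25
    cost_per_credit = 450      # tuition per credit hour, USD
    credits = 3

    class_revenue = students * cost_per_credit * credits  # 33,750
    print("Instructor's share: %.1f%%" % (100 * pay_per_class / class_revenue))
    # -> ~10.4%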

8
leggomylibro 22 minutes ago 0 replies      
Kind of unrelated, but I am extremely disappointed in how much 'giving back to local communities' has been abandoned by modern universities, in favor of their massive administrative payrolls.

What used to be a core tenet of thought is now a dollar figure. If you live in the state, you get a discount. End of story, obligations fulfilled, full stop.

But it's not a commitment or a central belief. No university will open its facilities to community members. No stage will be available for public performances, no instruments or machines will be made available for inquiring minds, no local organizations will spend more than a few hours on campus for a field trip.

Providing those things would cost money, sure enough. But they used to be part of a university's mission. Some of the older ones still enshrine the ideal in their mottos. That should mean something. It doesn't.

9
dmix 3 hours ago 2 replies      
Obligatory:

> Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people:

> First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration.

> Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc.

> The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.

https://www.jerrypournelle.com/ironlaw.htm

10
4bpp 1 hour ago 0 replies      
People are happy to claim they want to downsize university administrations when the appeal is voiced like this, but in another kind of editorial that pops up just as frequently in the opinion press, they will just as enthusiastically demand more counseling for students in emotional quandaries, more university-mediated internship opportunities, more officials providing sexual assault prevention training, more recourse to resolve student-advisor disputes in a way that shifts the power balance towards the student, more varied dining options and a plethora of other goodies that can mostly only be realised with more and/or more powerful admin staff.
11
dmix 3 hours ago 0 replies      
In an administration-heavy organization, every problem seems to be answered with "how can we add more administrative layers, processes, and backroom deals to satisfy group x (management, media, special interest groups, voters, etc.) so that something appears to be being done?" rather than asking "looking at all available options, what is the best way to resolve this problem (inside and outside of government)?". Basically a "when all you have is a hammer, everything looks like a nail" type of thing.

This is the difference between group A (admins) and group B (engineers, teachers, etc.) in Pournelle's Iron Law of Bureaucracy [1]. The former is often presented, in TV shows like House of Cards, as a clever, talented group of smart people who savvily work the system, but I question the utility it really offers the world when the ROI is so often questionable.

I'm curious how much of the dysfunction in modern governments (not just in the US) is due to the fact that politicians are now almost entirely career politicians who spend their formative years in this insular world, the majority coming from the same private schools and 90% of them holding law degrees. Rather than in the past, such as the founding fathers, who were businessmen, writers, and intellectuals embedded in the real world first, who then went into public service.

The same analogy applies to universities, with administrators being raised within the system rather than drawn from the teaching staff intimately familiar with the front-line realities of the organization.

[1] https://www.jerrypournelle.com/ironlaw.htm

12
forkLding 5 hours ago 5 replies      
I have actually had an idea related to the sort of debt that has to be taken on by American university students for their academics.

Why not use a mixture of blended learning and Coursera, where students pay not just professors but also industry folk for cutting-edge knowledge, and also open up a marketplace for YouTube tutorial people to do paid physical workshops in their cities, moving onto a sort of Airbnb for education and workshops? I feel a lot of straight-up learning can be gained, and the 3% charged per workshop would go towards scholarships. For people who think you would be paying high prices per lecture/workshop in this model: a university student has already been paying about $50 or more per lecture.

Instead of the basically physical/manual admin systems of individual centralized universities, you would have a decentralized software system managing schedules and booking workshops/lectures.

Just an opinion and thought I've been having, any feedback welcome.

13
reaperducer 5 hours ago 1 reply      
Whether it's a university or a government or any other bureaucracy, the money eventually flows to the top. And the top tier of employees have no incentive to remove their own livelihoods.

Thinning the waste at the top is a great idea, but never gets done unless a bigger bureaucracy makes it happen.

14
norswap 2 hours ago 0 replies      
It seems increasingly likely to me that the whole academic model is on the verge of existential crisis, and headed towards profound transformations. I'm curious to see where this leads. Hopefully someplace better than before. (I should mention I'm a PhD student.)
15
santaclaus 4 hours ago 0 replies      
Is this true in the US, in general? My alma mater has a circa 9 billion dollar endowment... yet it still sends me sob story letters soliciting donations.
16
cperciva 3 hours ago 0 replies      
About five years ago, the president of my university proudly announced at a Senate meeting that they had appointed a new Director of Sustainability, to help ensure that the university's operations were sustainable.

I asked if, in light of the increasing administration headcount, they thought appointing a Director of Sustainability was financially sustainable.

They clearly didn't get the message; while that particular Director has moved on to other things, there is now an Office of Sustainability with about a dozen people.

17
pragmar 3 hours ago 0 replies      
Probably the best deep dive I've read on the cost of higher ed was published a few years ago by Robert Hiltonsmith. This addresses US universities and colleges as opposed to UK, so the issues may not be one to one and the data is a little dated (published in 2015). Still, a worthwhile read.

http://www.demos.org/publication/pulling-higher-ed-ladder-my...

18
mrslave 3 hours ago 0 replies      
In addition to this article, from the vlog "Plunging Enrollment at Mizzou" <https://www.youtube.com/watch?v=h7CHd-w02lc>:

  They can't pay all of the administrators that they *need*.

Consider also that the bennies at de facto government jobs are higher than market rates (e.g. contributions to retirement schemes, insane types of paid leave), and it's all funded with your taxes or government debt (future taxes).

The FA also addresses the politicization of college education, which wasn't my point but is interesting nonetheless.

19
taksintikk 1 hour ago 0 replies      
Maybe they can start by getting rid of the NCAA (a private corporation) siphoning billions from colleges and universities.
20
c517402 4 hours ago 0 replies      
Administrative bloat isn't just a problem at the University level, but across all levels of education in the US.
21
randyrand 2 hours ago 0 replies      
The admin is not useless. It's tasked with gobbling up as much cash for the endowment as possible.
22
scythe 3 hours ago 0 replies      
How?

I've been hearing variations of "universities have too much administration" for at least a decade. I still haven't heard any solutions that aren't ridiculously expensive or already being implemented (and failing). Except, of course, for the solutions like "make it impossible for most of the people who currently attend college to go to college" which are approximately as politically viable as making French fries illegal and which furthermore do not really seem to have any mechanism to fix administrative costs other than "market voodoo".

One thing that tends to come up repeatedly when these discussions do reach a modicum of depth is the persistent gaming of the university rankings system. Universities game the ranking system by e.g. encouraging students to apply who are not likely to get in so that they will appear to be more exclusive. It would be nice to improve university rankings so they can't be so easily gamed, but will this actually make a significant difference? I'm not convinced yet that rankings actually have that large of an influence on university policy, in the first place.

The need for universities to police student behavior is another unfortunate situation. It's my understanding that federal cost-cutting is basically behind the moves on the government's part to implement policy by way of schools' enforcement of student misconduct. An honest politician should be able to fix this by allocating money for the government to do what it should have been doing in the first place, but increasing spending is very hard these days...

23
lspears 4 hours ago 0 replies      
Just use Udacity or Coursera
24
muninn_ 5 hours ago 2 replies      
We can't cut the pointless admin for public schools because we need to comply with government regulations. Not making a judgement call here, but it's there.
25
Overtonwindow 5 hours ago 3 replies      
Extremely unpopular opinion: Let's cut both the extra admin, and excess tenured professors. In fact, the entire tenure system should be completely upended. At my alma mater there were a lot of professors who didn't have enough students sign up for their classes, or their field of study was phased out, or reduced. Instead they sat around doing "research" and when they were forced to teach, quite a few openly expressed disdain for teaching.
30
Show HN: The best time to visit any city
372 points by ignostic  1 day ago   116 comments top 60
1
ygra 21 hours ago 1 reply      
It's a nice idea, for sure. I'm not sure I'd use it much since other considerations apart from weather also exist.

A few things I've noticed:

The search could use some awareness of other names for locations. For example, München cannot be found, Munich can. Tübingen doesn't seem to exist at all (maybe too small).

For people outside the US (yes, they exist) it'd also be nice to have a site-wide switch to metric. This then won't require you to have two charts of everything either (except snow coverage which doesn't seem to exist in metric).

The legends for the charts look a lot like buttons, which can be a bit confusing at first. Maybe it's better to integrate the legend into the charts, e.g. like http://hypftier.de/temp/2017-08-23_090140.png, which would also save a bit of space; whitespace currently looks a bit haphazardly applied in general.

The animation of the charts seems a bit pointless, considering that they're all below the fold anyway.

2
andyjsantamaria 1 day ago 5 replies      
This is a super interesting idea and something I'd reference a lot. The big question I have is: what are your factors for deciding the best time to visit? I'd argue setting up the context is really important, because weather is a major factor but it isn't the only one. There are times of year that have cultural significance, as well as annual events, etc. I like the safety advisory aspect and the population of travelers. It would be interesting to know what types of groups travel there and when. So for instance: I want to go to Hawaii, but not when there are going to be tons of kids, and I'd like to do it cheaply, and I don't care about the weather.

Lastly, I don't recommend bucketing NZ under Australia :)

3
stinos 1 day ago 1 reply      
This is really neat! Only thoughts: it assumes one's definition of 'best' is 'best weather' and that in turn means it matches with what your algorithm decides is most pleasurable (which I think it does a pretty good job at). This is probably ok for most people but e.g. I like to travel to see nature and rare species of plants and whatnot, and that completely changes what 'best' is for me as it makes weather not one of the top things to consider.
4
Symbiote 1 day ago 0 replies      
I was surprised by the cities shown on the map -- nothing from Britain or Ireland, only Odense from Scandinavia, yet five places in Moldova and loads more in Ukraine.

Looking at Copenhagen [1], the Celsius graph is maxing out at 10C -- perhaps it would be neater to show a single graph, with a Fahrenheit scale on the left and Celsius on the right. Or just detect that my browser locale is not en_US, and show Celsius...

Minor thing: metres per second (m/s) is a fairly common wind measurement unit. And it should be km/h, not KPH.

[1] https://championtraveler.com/dates/best-time-to-visit-copenh...

5
jeromesalimao 1 day ago 1 reply      
On the 'Weather in Sydney' page: "The warmest time of year is generally mid-January where highs are regularly around 61.9F (16.6C) with temperatures rarely dropping below 50.3F (10.2C) at night."

I think something is off here. That sounds like our winter weather! I would guess our average summer temperature would be closer to 30C.

6
s_kilk 1 day ago 2 replies      
A tiny bit of feedback...

> June–August is slow/unreported season for tourism in Edinburgh, so lodging and other accommodations may cost as much as usual.

This is... odd. In August, Edinburgh has the legendary (Fringe) Festival, a month in which the city's population quadruples, making it easily the most intense month of the year for tourism.

Maybe the dataset requires some manual tweaking?

7
sharkweek 1 day ago 2 replies      
This is great - I think the search functionality could be improved a bit (a lot of results following the one I was looking for that didn't seem related, but the right one did show up first).

I wonder if there would be a benefit of a "community" element as well, as in allowing comments on the pages, to give locals the opportunity to chime in with their advice.

8
codingdave 1 day ago 1 reply      
Weather is just one small piece of people's thought process when planning a vacation. But this tool is clearly all about weather... Perhaps finding a new way to describe it other than "the best time to visit" would avoid people coming down on the tool because they want to talk about more than the weather.
9
Humphrey 20 hours ago 2 replies      
Best time of year is subjective!! All Lonely Planet guide books solve this problem by explaining what each place is like at each time.

E.g., for Yosemite, it describes how during summer it is ridiculously busy & hot, so you might prefer to go during the shoulder months, but then you risk having some of the park closed for snow. So, "best weather" is subjective.

Likewise, there are many locations, such as Thailand, where the best time to travel is winter. It's too hot in the summer!!

10
Eiriksmal 1 day ago 2 replies      
I love your weather summaries. Your formula nails it with this one: https://championtraveler.com/dates/best-time-to-visit-san-di...

Most weather summaries seem to miss that early/mid-September is significantly hotter than the traditionally hot months of July and August for most locations.

https://championtraveler.com/dates/best-time-to-visit-louisv...

> When can you find snow in Louisville? Weather stations report a bit of annual snow likely to be deepest around March, especially close to early March. Powder (new snow) is most often falls around November 12th.

Seeing powder forecasts for Louisville, KY cracked me up.

11
pkulak 1 day ago 3 replies      
This seems really off:

https://championtraveler.com/dates/best-time-to-visit-portla...

It says the daily highs in the summer are in the low 60s and that the driest months are in the middle of the winter.

12
Al-Khwarizmi 1 day ago 1 reply      
Hey, this is really useful! I'll bookmark it and probably use it.

A few bugs/glitches, though:

- The Celsius temperature graph for Barcelona ( https://championtraveler.com/dates/best-time-to-visit-barcel... ) doesn't show temperatures above 25C, so every temperature above that gets cropped to 25. The scale should probably be adapted to the data.

- "The busiest month for tourism in Barcelona, Spain is May, followed by March and March." March and March?

- Maybe this is a problem with your dataset and not the app, but just in case, check the snow graph/data for A Coruña: https://championtraveler.com/dates/best-time-to-visit-a-coru... - 108 cm of snow in April? I can guarantee the real average is close to zero :)

- Also, in little-known tourist destinations (e.g. A Coruña, from the last link) tourism in all seasons is reported as "slow or unreported". Which is true, of course (in the best month in A Coruña you will see far fewer tourists than in a really bad month in Barcelona). But maybe relative data (tourism relative to the average in that city) could make sense?
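
A minimal sketch of the axis fix, assuming a matplotlib-style renderer (the site's actual charting stack isn't stated, and the temperatures below are made up):

    import matplotlib.pyplot as plt

    # Hypothetical monthly average highs for Barcelona, in Celsius.
    months = range(1, 13)
    temps_c = [13, 14, 16, 18, 21, 25, 28, 29, 26, 22, 17, 14]

    fig, ax = plt.subplots()
    ax.plot(months, temps_c)
    # Derive the y-limits from the data instead of hard-coding a 25C cap,
    # so hot months are no longer cropped.
    pad = 2
    ax.set_ylim(min(temps_c) - pad, max(temps_c) + pad)
    plt.show()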

Keep up the good work!

13
cakedoggie 23 hours ago 0 replies      
So the best time to visit Sydney is almost the entire year, except for the last 2 weeks of January?

> February 5th to January 14th

https://championtraveler.com/dates/best-time-to-visit-sydney...

Love the idea.

14
Zaskoda 1 day ago 0 replies      
Recommendation: planning snow-related trips (snowboarding/skiing) largely depends on the average snowpack on a mountain at that time of year, which is directly related to the amount of snowfall in that location as well as the temperature (does it get cold enough for artificial snowmaking?). Build the right tool for planning snow trips and it should be easy to monetize.
15
foota 1 day ago 1 reply      
Anyone here might be interested in https://weatherspark.com/
16
hissworks 1 day ago 0 replies      
Really neat application of the data. I'm the Director of Marketing at a mid-sized destination marketing organization (DMO) represented on your map (Aurora, Illinois -- we're actually a terrific place to visit in winter!). Curious to learn more about the variables used to deploy the "best time" pages and get a sense for where you'd like to take this further.
17
sonium 1 day ago 0 replies      
The 'when to travel' advice could be made dependent on where you live. When I lived in Norway I thought 25C was quite warm. Now that I live in southern Germany I think 25C is more intermediate. Also, in spring, temperatures feel a lot higher (since I got used to cold winter weather) than by the end of summer (when I got used to hot weather).
18
flavor8 1 day ago 0 replies      
Many of the temperature charts have an uptick at the extreme right (end of December), and then (if you imagine them wrapping around to January) a discontinuous drop at the start of January. I think there's something slightly funky there.

I saw at least one Celsius chart where the data overflowed the maximum (20C) and the plot was mashed against the top of the frame.

You could avoid having to double-render the same data (and wasting space) by putting Celsius on the right y-axis of the Fahrenheit chart.
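
A rough sketch of that single-render idea, again assuming a matplotlib-style renderer with invented data (secondary_yaxis needs matplotlib 3.1+):

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical monthly average highs in Fahrenheit for one city.
    months = np.arange(1, 13)
    highs_f = np.array([46, 51, 57, 61, 67, 73, 80, 81, 75, 64, 52, 45])

    fig, ax = plt.subplots()
    ax.plot(months, highs_f)
    ax.set_xlabel("Month")
    ax.set_ylabel("Avg high (F)")

    # Convert between the two units so the same curve is labeled in both.
    def f_to_c(f):
        return (f - 32) * 5 / 9

    def c_to_f(c):
        return c * 9 / 5 + 32

    # One plot, two scales: Celsius rides on a secondary axis.
    secax = ax.secondary_yaxis("right", functions=(f_to_c, c_to_f))
    secax.set_ylabel("Avg high (C)")
    plt.show()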

Once you fix the bugs, pay a UI designer to give all of the pages a refresh. Work some SEO, and see if you can find a way to give search engine bots all of the various city pages (which I assume you dynamically generate). Throw in some hotel/airline-ticket site affiliate links and you should get a nice stream of income from this.

19
kristianc 1 day ago 0 replies      
This is a really interesting idea - the kind of tool that you only realize you've been waiting for when someone shows it to you! As someone who likes to travel quite a bit, this will potentially replace a lot of Google searches for me. Your weather summaries will also be great for SEO and are very well optimised.

One bit of feedback I'd give is it'd be worth populating with some "temperate" defaults (i.e. normal average temperature, normal humidity etc.). At the moment it seems like it might take a bit of configuring to get to the information you'd want, when it'd seem like you could take an educated guess.

UI/UX-wise I would also make the "date" slider a bit more prominent, and maybe simply limit it to monthly averages (it doesn't seem to matter too much whether we're talking the 2nd or 3rd week of March), for instance.

Otherwise, really love it, and excited to see the ways people are using Tableau for this kind of thing :)

20
mule76 1 day ago 0 replies      
Pretty cool job.

It would be nice if the user had a few sliders to toggle (heat, humidity, rain, and crowds), rated say 0 to 10 (with 5 meaning don't care), to get around having to select an ideal temperature for everyone. Some people want sun, others want snow, and others don't care about either.
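
A sketch of how that scoring could work; the factor names and the 0-1 normalization are assumptions, not anything the site actually does:

    # Map each 0-10 slider to a weight in [-1, 1]; 5 ("don't care")
    # maps to 0 so that factor drops out of the score.
    def month_score(factors, prefs):
        # factors: a month's values normalized to 0-1 (heat, humidity, ...);
        # prefs: the user's slider ratings, 0-10, for the same keys.
        return sum(((prefs[k] - 5) / 5) * v for k, v in factors.items())

    # Example: wants warmth, hates rain, indifferent to humidity and crowds.
    july = {"heat": 0.9, "humidity": 0.6, "rain": 0.1, "crowds": 0.8}
    prefs = {"heat": 9, "humidity": 5, "rain": 1, "crowds": 5}
    print(month_score(july, prefs))  # higher = better month for this user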

21
otterpro 1 day ago 1 reply      
Nice website - I had always been looking for a travel website like this.

On the travel-weather-map site, I searched for "San Francisco" and the first result was Argentina, and the second was Costa Rica. I hope the results are based on popularity and not on alphabetical order.

22
wjan 20 hours ago 0 replies      
Cool! I built a similar tool around 2011. I travel a lot and wanted something that would help me plan my travels according to specific months. I've used it on a regular basis since then, and it's publicly available at: http://weatherhopper.com
23
kristofferR 1 day ago 0 replies      
Great site, though the UI/design isn't the best. Improving that will make a huge difference. One of the first things I would do is make Fahrenheit/Celsius a sitewide setting, like on WeatherSpark.com
24
wyldfire 1 day ago 2 replies      
> Tourism graph is based on Google searches for services used by tourists relative to the rest of the year.

That's a pretty clever way to go about it. I hope/suppose that the sampling bias isn't correlated with time-of-year somehow.

25
eam 1 day ago 0 replies      
Very interesting start. I know for a fact that if you travel to Seattle in Jan/Feb it's really cheap, but of course that's because it will be raining a lot. Rain, for some people, is not a real problem, but price is. With that in mind, it would be cool to find out when the cheapest time to travel somewhere is. That would be really useful for me and probably others.
26
chenster 18 hours ago 0 replies      
The data can essentially be reduced to a single list on one page because the only thing that is really useful is the range of dates for the best time to travel for each location.

Better yet, add the option to sort the list by dates.

27
swampthing 1 day ago 0 replies      
This is a great idea, and something I always wished existed - kudos! Seems like there's some great opportunities for SEO and advertising here as well - keep rocking!
28
neelkadia 22 hours ago 0 replies      
Nice one! Just a thought: apart from atmospheric information, you could take festivals into account. E.g., in India, Holi occurs during March and the Kite Festival during January. You could add such things by just combining a few places with Wikipedia's festival lists!
29
Steeeve 1 day ago 0 replies      
I don't know if that Tableau map is going to scale to your desired traffic levels. It's a great enterprise tool, and yeah there are a few publications that leverage it, but my instincts tell me to avoid it on a mid-to-high traffic public page. I could be wrong. It's a difficult graph to replicate under fire, but not so hard that a day or two spent on it wouldn't produce a more performant version.
30
rconti 1 day ago 0 replies      
Cool idea! I just got back from Austria and was surprised by the rain. Didn't even think to pack a rain coat. Even though I checked the weather beforehand, I figured it was a minor fluke. It's so hard for a West Coast US person to understand that it rains in some places in the summer.
31
clishem 1 day ago 1 reply      
32
carbocation 1 day ago 0 replies      
Immediately after page load (without interaction), I get the following on 64-bit Chrome (60.0.3112) on Ubuntu 16.10:

An unexpected error occurred. If you continue to receive this error please contact your Tableau Server Administrator.

Session ID: 7EBED2C1927841ED9575329CE40EB6F7-0:0

Uncaught TypeError: Cannot read property 'refreshImages' of null

33
hammock 1 day ago 0 replies      
Great idea, mediocre execution, someone else here please get with OP and help him! This is an awesome tool!
34
zzleeper 1 day ago 0 replies      
Following the feedback from others, a few things seem odd (for Peru):

- I tried it with Lima, Peru, and it actually suggested a bad time to visit the city: https://championtraveler.com/dates/best-time-to-visit-lima-p... . The weather in winter is quite humid ( http://www.generaccion.com/noticia/imagenes/grandes/188862-2... ), so it's not the best time to be there compared to the summer.

- Also, in the mountains the season to avoid is the rainy season, because it really rains (Jan-Feb).

- Finally, as others pointed out, the choice of locations is a bit odd. There are a few smaller towns, but, for instance, Cuzco and Machu Picchu are not there.

35
chis 1 day ago 0 replies      
I bet you could do this by looking at all the Yelp reviews for an area over time. Some combination of average review score and total review quantity would produce a legit metric.

But Yelp's and Google's APIs aren't really designed for this kind of use, sadly.
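
A back-of-the-envelope version of that metric, with made-up reviews and a smoothing constant picked arbitrarily:

    import numpy as np
    import pandas as pd

    # Made-up reviews: one row per review, timestamped, with a star rating.
    reviews = pd.DataFrame({
        "date": pd.to_datetime(["2017-01-03", "2017-01-19",
                                "2017-07-04", "2017-07-08"]),
        "stars": [4.0, 5.0, 3.0, 5.0],
    })

    monthly = reviews.set_index("date").resample("M")["stars"].agg(["mean", "count"])

    # Shrink sparse months toward the overall mean so a single 5-star review
    # in a dead month doesn't win, then weight by review volume.
    prior, k = reviews["stars"].mean(), 10  # k = pseudo-count, a tunable guess
    smoothed = (monthly["mean"].fillna(0) * monthly["count"] + prior * k) \
        / (monthly["count"] + k)
    monthly["score"] = smoothed * np.log1p(monthly["count"])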

36
tmaly 1 day ago 0 replies      
I like this idea a lot, but there is also one thing to consider: crowds. The best weather also draws the biggest crowds, which drives up food and hotel prices. If you could somehow factor in the crowds and peak season, you could have a really useful tool.
37
madman2890 1 day ago 0 replies      
I'm stoked you built this, as I've had the idea and wanted the platform for a while. I do a ton of last-minute travel and have semi-unlimited options, so it's hard to filter down to the best options that offer what I'm looking for.
38
pitaj 22 hours ago 0 replies      
It would be great if this included a few more metrics besides weather, like popular tourist times (some people might want to avoid lines and such).
39
hellofunk 17 hours ago 0 replies      
The New York advice certainly is wrong. Hot humid summer is not New York's best experience.
40
api_or_ipa 1 day ago 0 replies      
It's funny - it got Vancouver, BC exactly backwards. I'm sure most residents of the notoriously wet city believe July 1 - Aug 27 is the best time to visit, not Aug 27 - Jul 1.
41
vimota 1 day ago 0 replies      
I would love a version of this that supports ranges of weeks, or the whole year. I want to use it to find the best place to live based on my preference for yearly weather!
42
xapxap 18 hours ago 0 replies      
Cool - I had a similar idea a month ago when looking for a place to go camping: find the best weather forecast within a range of $x hours' drive, or the best weather forecast near directly connected airports from my town's airport.
43
mikekij 1 day ago 0 replies      
Great work! Although it says the best time to visit San Diego is February through November. I guess that's right though; it's great here any time!
44
averageweather 1 day ago 0 replies      
Maybe we can partner up :). I made http://www.averageweather.io for a very similar reason.
45
reustle 1 day ago 0 replies      
I did something similar as a little weekend hack a few years ago: http://whengo.io
46
csommers 1 day ago 0 replies      
I think recommendations like this need local input more than anything else - e.g., see the ton of comments here pointing out issues.

Good start though.

47
hackonit 1 day ago 0 replies      
I made something similar in 2013, but it was not nearly as complete. Good work. Trekweather.com
48
helloworld 1 day ago 0 replies      
The best times to visit San Diego for ideal weather are February to November

Yeah, that's been my impression, too! :-)

49
corybrown 1 day ago 0 replies      
This is cool - where do you get the data? I see it's from NOAA, but what kind of data files do you get from them?
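
Not certain, but GHCN-Daily is the usual NOAA source for this kind of site. A minimal sketch of reading its per-year CSVs (whether OP uses these exact files is a guess):

    import pandas as pd

    # GHCN-Daily per-year files (ftp.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/)
    # are headerless CSVs: station, date, element (TMAX/TMIN/PRCP/SNOW/...),
    # value, and flag columns.
    cols = ["station", "date", "element", "value",
            "mflag", "qflag", "sflag", "obstime"]
    df = pd.read_csv("2016.csv.gz", names=cols)
    df["date"] = pd.to_datetime(df["date"].astype(str), format="%Y%m%d")

    # TMAX is stored in tenths of a degree Celsius.
    tmax = df[df["element"] == "TMAX"].copy()
    tmax["tmax_c"] = tmax["value"] / 10
    monthly_high = tmax.groupby(tmax["date"].dt.month)["tmax_c"].mean()
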
50
fwx 1 day ago 1 reply      
This is very cool. Long shot, but any chance you can blog about the algorithm used? Or open-source it?
51
rocky1138 1 day ago 0 replies      
This is really cool, but why is everything in Fahrenheit? How do I make it default to Celsius?
52
pacomerh 1 day ago 0 replies      
"The best times to visit Ensenada for ideal weather areJanuary 1st to December 30th"

nice, all year

53
lukasm 1 day ago 1 reply      
It says the time to visit Australia is all year long, but from what I hear the best is November or February.
54
skdjksjdksjdk 1 day ago 1 reply      
How do you get the number of Google searches programmatically? Is there an API or something?
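
There's no official Trends API, but Google Trends (which only gives relative interest on a 0-100 scale, not raw counts) can be scripted with the unofficial pytrends wrapper. A minimal sketch, with an assumed proxy query:

    from pytrends.request import TrendReq

    pytrends = TrendReq(hl="en-US", tz=0)
    # Relative interest over time for a tourist-y query, not absolute counts.
    pytrends.build_payload(["hotels in Barcelona"], timeframe="today 5-y")
    interest = pytrends.interest_over_time()

    # Average by calendar month for a seasonal tourism proxy.
    monthly = interest["hotels in Barcelona"].groupby(interest.index.month).mean()
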
55
nsnick 1 day ago 0 replies      
This tool doesn't know about winter destinations. Enter Salt Lake City or Denver.
56
blondie9x 1 day ago 0 replies      
How does climate change impact your data and site?
57
combinationy 1 day ago 1 reply      
Would be nice to be able to select degrees Celsius.
58
adamzerner 1 day ago 0 replies      
How about looking at average flight prices?
59
oriettaxx 1 day ago 0 replies      
cool (°C would be appreciated)
60
horsecaptin 1 day ago 2 replies      
No results found for Osaka :(