hacker news with inline top comments    17 Sep 2017
NixOps Declarative cloud provisioning and deployment with NixOS nixos.org
95 points by mrkgnao  3 hours ago   24 comments top 5
mrkgnao 43 minutes ago 1 reply      
In view of some replies downthread: yes, application-level management is entirely possible with Nix(OS).

Currently, this works best for Haskell, for which there is a sizeable amount (relatively speaking) of Nix tooling available. This is largely because a lot of Nix(OS) users also use Haskell, perhaps because the two lend themselves to very similar modes of "declarative" thinking.

This is a thorough, excellent guide on how to develop Haskell projects with Nix, including managing non-Haskell dependencies (zlib, SDL, ncurses, and so on):


I only started using NixOS about a month ago, and it's fantastic. The best part is that binary caches are available for almost everything, so the hitherto-painful step where one compiles Haskell dependencies locally has been completely eliminated. The best way I can describe how I work now is "Python virtualenvs, but for everything at once and just so much better".

PS. Gabriel Gonzalez is, in general, someone who writes great documentation. Check out the tutorial for his stream-processing library Pipes: the ASCII art alone is awesome :)


zbentley 1 hour ago 2 replies      
Other than integration with the rest of the Nix/NixOS ecosystem, how is NixOps different from Puppet?

Reading briefly through the examples, it seems very oriented towards provisioning machine instances rather than provisioning behavior across networks or multi-machine services, which is something that Puppet really excels at. What features does this offer in addition to ones that Puppet provides?

That said, I've been getting tired of Puppet recently for some unrelated reasons, so I'll definitely give this a try.

zeisss 1 hour ago 6 replies      
Is there a good explanation anywhere of the syntax of those files? I've tried getting into Nix multiple times and was always put off by how hard the language is to understand.
pierrebeaucamp 1 hour ago 1 reply      
I would love if someone could compare NixOps with BOSH.
djsumdog 2 hours ago 2 replies      
Seems neat, better than Terraform, although limited to just nix and a few providers. Hopefully that will grow. I've been working on a provisioning tool myself. Provisioning is hard, and supporting multiple APIs can be really difficult.
Our Approach to Privacy apple.com
70 points by fjk  2 hours ago   44 comments top 5
maxpert 37 minutes ago 2 replies      
At least one company is "trying" to keep my photos private. The other day Google Photos told my wife it had prepared an album for our trip to SFO. We were surprised because she had already disabled geotagging, but what do you know... Google still figured it out!
quadrangle 33 minutes ago 1 reply      
Apple does so many things well, actually. I would return to them for some things and even recommend them (versus exclusively GNU/Linux and LineageOS) if they'd only change the stupid iOS terms that prohibit GPL software.
mtgx 1 hour ago 2 replies      
> While we do back up iMessage and SMS messages for your convenience using iCloud Backup, you can turn it off whenever you want.

Wouldn't they be able to keep that privacy promise much better if they allowed people to leave iCloud Backup on for, say, pictures, while still disabling backups of iMessage messages? I think many people would like to use iCloud, but without it backing up personal conversations.

Also, iMessage's end-to-end encryption was rather flawed the last time Matthew Green checked, compared to other end-to-end messaging apps.


As for their use of differential privacy: when they introduced it, it was essentially a hidden way of gathering more of your data than before, not less, while still being able to say "hey, we may gather more data than ever on you starting with the new iOS, but it's pretty private, so it's cool, don't worry about it".

All of that said, I know Apple is still miles ahead of Google on privacy. If anything, over the last 1-2 years, Google has become increasingly bolder and more shameless about tracking users without them realizing (except in the EU, where they are forced to make it a little easier for users to understand how they are being tracked, and even that happened because of the anti-trust lawsuit).

Here's just one example of Google's increasingly privacy-hostile behavior:


aub3bhat 1 hour ago 3 replies      
On Android you can sideload VPN apps; iOS, on the other hand, banned them in China. Apple mounted the most successful attack on general computing with its walled garden. The whole "Apple is good for privacy" line is a marketing stunt.
whathaschanged 26 minutes ago 2 replies      
Apple's approach to privacy also includes being a partner in PRISM, a fact they vigorously denied as a false allegation until it was proven true.

Every story about Apple and privacy chooses to omit this huge piece of info.

Why should anybody trust them now? What has changed to make anybody believe they aren't still lying about privacy?

A simple heap memory allocator in about 230 lines of C github.com
100 points by ingve  6 hours ago   60 comments top 11
katastic 2 hours ago 1 reply      
I remember years ago my friend was going to college for CSCI and had to write a malloc for a course. He procrastinated till the last... night... and wrote it. In the free() routine, he simply wrote "return true;". The TAs unit-tested the code because there were over a hundred classmates to grade. Well, the unit tests must not have been very good, because he said he scored a 100.
pm215 5 hours ago 2 replies      
Coincidentally, the libc malloc/free implementation in 7th Edition Unix is also about 200 lines of C: http://www.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/libc/ge...
huhtenberg 5 hours ago 4 replies      
It's not a "simple" allocator. It's an overly simplistic and largely unoptimized one.

E.g. try to make it work well with 2 threads. Now try to make it work well with 2 threads where one is allocating and the other is freeing, etc.

Writing a memory allocator is a _fantastic_ exercise in data structures and optimization. It's also an easy one, so making a reasonably fast allocator from scratch is not that hard, but in the end it's much more fun to write one than to look at someone else's results. This is also what makes these toy allocators a dime a dozen.

masklinn 3 hours ago 0 replies      
> In order to initialize this heap, a section of memory must be provided. In this repository, that memory is supplied by malloc (yes, allocating a heap via heap). In an OS setting some pages would need to be mapped and supplied to the heap (one scenario).

brk/sbrk is a bit of a pain but would it really be more complicated to use mmap?
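Not an answer for the C side, but the shape of the mmap approach is easy to sketch even from Python: map an anonymous, page-aligned region and let the allocator treat it as its arena instead of a malloc'd buffer (the names here are made up for illustration):

```python
import mmap

PAGE = mmap.PAGESIZE
arena = mmap.mmap(-1, 16 * PAGE)     # -1 = anonymous mapping, no backing file
size = len(arena)                    # page-aligned region, 16 pages long
arena[0:4] = b"\xde\xad\xbe\xef"     # an allocator would write its block headers here
header = bytes(arena[0:4])
arena.close()
```

In an OS setting you'd do the same thing with mmap(2) (MAP_ANONYMOUS | MAP_PRIVATE) rather than brk/sbrk, and growing the heap is just mapping another region.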

samblr 1 hour ago 0 replies      
Some time ago I came across the jemalloc and tcmalloc memory allocators while working around a performance limitation of our platform (in a multithreaded environment). I think it was jemalloc that gave a real boost in performance. Worth trying them too.
saurik 2 hours ago 2 replies      
I was extremely happy to see this, as I have been trying to find a single-threaded memory allocator (one with no reliance on any form of threading primitives, which I simply don't have on the platform I am dealing with; I point that out to make clear that I am not looking for this due to any delusion about performance). But since this doesn't have a license on the code, it is essentially useless :(.
bjourne 4 hours ago 1 reply      
I suspect that some of the functions for iterating the linked lists are a big drag on performance. Such as: https://github.com/CCareaga/heap_allocator/blob/master/llist... An optimization that most memory allocators use is to instead store the nodes in a balanced tree, such as a red-black one. Or even better, a B-tree. That turns those functions from O(n) to O(log n).
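Not an allocator, but the data-structure point is easy to see in miniature. Here's a hypothetical sketch (the names and sizes are mine) where free-block sizes live in a sorted sequence, so "find a block of at least this size" becomes a logarithmic lookup instead of a linear list walk — real allocators get the same effect with red-black trees or B-trees:

```python
import bisect

# Hypothetical free-list: sizes of free blocks, kept sorted.
free_sizes = sorted([16, 32, 48, 128, 512, 4096])

def best_fit(size):
    # O(log n) search for the smallest free block >= size,
    # versus walking a linked list in O(n).
    i = bisect.bisect_left(free_sizes, size)
    return free_sizes[i] if i < len(free_sizes) else None

print(best_fit(100))   # → 128, the smallest block that fits
```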
joelthelion 5 hours ago 2 replies      
Could you use this as a malloc/free replacement or is it missing something?
Keyframe 4 hours ago 0 replies      
senatorobama 4 hours ago 3 replies      
What's the state of the art in memory allocation?
pjc50 4 hours ago 2 replies      
The implementation of malloc in K&R is about 20 lines...
Finding UX in the Trash f2.svbtle.com
47 points by 0xF2  3 hours ago   30 comments top 8
foobarbecue 0 minutes ago 0 replies      
You guys should see the trash sorting at McMurdo station, Antarctica. When I first traveled there in 2009, there were about 20 categories, and dorms had an area just like this one with about 10 of those categories present, each with a page of documentation. If your category wasn't there, you needed to go find that category somewhere else. The categories overlapped and changed from year to year and had names like "burnables" and "paper towels." Users were encouraged to call the Waste Department for support, and actually the people at Waste seemed to understand and believe in the system (unlike me) so I called them frequently to ask where to put e.g. an empty juice box. To be fair, this system was borne out of the real difficulties of waste management in Antarctica, and I'm not sure I could have done a better job with the UX.
cptskippy 1 minute ago 0 replies      
I really hate these articles that bemoan poor UX but in reality are a failure of the author to take the time and look at the problem from a different perspective.

Sure the Mac printer selection experience is great when you have a laptop that you carry from work to home and print from a single printer at either location.

It sucks when you have a desktop at work and you can see the entire corporate network. That printer sitting across the hall might be 10 hops from you on the network but the printer on some exec's desktop might be 2 hops even though he's 2 floors up.

Or maybe you're a home user who never takes your laptop anywhere but you have a document printer and a photo printer. Or a kid and they have a printer for school work in their room. Then you have to check every time you print.

sdbrown 30 minutes ago 0 replies      
Trash sorting is the absolute worst UX. A newspaper goes in paper recycling, fine. Does a glossy magazine also go in paper recycling? Cardboard goes in paper recycling. Does paper cardboard with a plastic overcoat go in paper recycling? Can TetraPaks or whatever the heck Zico/juice boxes/etc. uses be recycled as "paper"? What about steel cans (which most coconut water in southern California is packed in), do they go in "cans" recycling, since there is no "Aluminum"-specific stream in my workplace?

Trash UX is really awful. In my office it's even worse - the custodial staff just dump the at-desks blue recycling bins into the same large trash can as the black waste bins. Why the heck do we even have recycling bins if it's just going to be commingled anyway?

xg15 2 hours ago 2 replies      
The paradigm works well as long as the designer has a clear idea of what the user wants to do and as long as the designer('s boss') and the user's goals are aligned - and as long as the designer could anticipate all use-cases.

Nothing is more infuriating than finding out that something that should be straight-forward to do is hard or even impossible because the option got taken away in the name of UX.

Actually, there is one thing more infuriating: If that UX was also inspired by business goals and not user interests.

tyingq 2 hours ago 2 replies      
The article doesn't propose a solution for the trash problem.

I suspect because it's not really a UI problem. UI can't magically make all complex interactions easy. Fixing the problem would mean changing the requirements first... then the UI.

Edit: Never mind...missed the key sentence there in all the discussion about the 4 labels.

ccleary00 1 hour ago 0 replies      
Many people do not understand UX and its role, but in my experience UX teams do not really understand how to operate within organizations and adapt to processes either.
amelius 2 hours ago 3 replies      
The recycling industry should develop automated ways to separate garbage appropriately, in a central location; perhaps using machine learning. Then you eliminate the UX altogether. The best UX is no UX.
0verc00ked 2 hours ago 0 replies      
I literally just found some UX in the trash.
Every Nintendo Switch appears to contain a hidden copy of NES Golf arstechnica.com
63 points by Tomte  6 hours ago   2 comments top 2
vanattab 53 minutes ago 0 replies      
I loved playing Golf as a kid. I think it influenced almost every major golf game that followed.
proksoup 57 minutes ago 0 replies      
The game you play when you retire, get the lowest score.
The Minsky circle algorithm nbickford.wordpress.com
147 points by fanf2  14 hours ago   21 comments top 9
userbinator 10 hours ago 1 reply      
The pair of equations can be expressed succinctly as

 y -= x >> 4; x += y >> 4;
and is basically a very rough approximation of a step in the CORDIC[1] algorithm, which computes the sin and cos of an angle by rotating a vector; it's then quite natural that successive rotations would trace out an approximation to a circle.

As such, many of the first programs for the PDP-1 were display hacks: programs only a few lines long that, when run, create intricate patterns on the screen.

The demoscene has carried on that tradition, and in particular the sub-512b categories produce very interesting graphics from tiny programs.

[1] https://en.wikipedia.org/wiki/CORDIC
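If anyone wants to see it without firing up an emulator, here's a quick Python sketch of the same pair of updates (not the original PDP-1 code; the key detail is that the second line uses the already-updated y):

```python
# y -= x >> 4; x += y >> 4 -- the x update sees the new y.
def minsky_orbit(x, y, shift=4, steps=2000):
    seen = []
    for _ in range(steps):
        y -= x >> shift
        x += y >> shift
        seen.append((x, y))
    return seen

orbit = minsky_orbit(100, 0)
# The orbit stays bounded, tracing a rough circle of radius ~100,
# instead of spiralling outward as a simultaneous update would.
print(max(abs(px) for px, py in orbit), max(abs(py) for px, py in orbit))
```

The two shears compose to a determinant-1 map whose trace is just under 2, so the exact map is an elliptic rotation; the integer truncation is what makes the bounded, exactly-periodic behaviour surprising.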

Iv 9 hours ago 0 replies      
I think the key lies in the fact that we are computing a numerical solution to a differential equation: y = y - (1/16)x actually means that we add to y an amount that depends on the value of x; in mathematical terms, the (approximated) derivative of y is proportional to -x.

Assuming deltat = 16 (so the 1/16 factor is absorbed into the time step), we have:

y' = -x

x' = y

which is solved by y=cos and x=sin, or by x=-cos and y=sin.

The fact that one line adds and the other subtracts may explain why rounding errors compensate each other over time, but we would need a more detailed working of the PDP arithmetic to be really sure about that.

Actually, many CS students may recognize a predator-prey model, which very simply creates sinusoids that are out of phase by pi/2. That is, if one is a sine, the other is a cosine.

masswerk 6 hours ago 1 reply      
See also here (PDP-1 emulation, running the Minskytron, Munching Squares, and more; includes a description of the program and a link to the annotated source code): http://www.masswerk.at/minskytron/

David Mapes invented the same algorithm, also on the PDP-1, at Lawrence Livermore National Laboratory (LLNL). While Minsky came across this algorithm by accident, Mapes arrived there by design. See http://www.masswerk.at/minskytron/davidmapes.html

osteele 6 hours ago 0 replies      
The book Minskys & Trinskys referenced in the article can be previewed here http://www.blurb.com/bookshare/app/index.html?bookId=2172660

The third co-author R.W. Gosper is Bill Gosper https://en.wikipedia.org/wiki/Bill_Gosper, who discovered the glider gun and the hashlife algorithm (linked from the Wikipedia page).

tehsauce 9 hours ago 1 reply      
I was curious to see that first demo in action, so I quickly whipped up a shadertoy


pkaye 11 hours ago 1 reply      
The circle algorithm seems to come from the trigonometric identities with small-angle approximations for b -> 0, i.e. cos(b) ~= 1 and sin(b) ~= b.
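Spelled out, each step is just a small-angle rotation (a sketch; b = 2^-4 matches the shift by 4 in the code upthread):

```latex
\begin{pmatrix} x' \\ y' \end{pmatrix}
=
\begin{pmatrix} \cos b & \sin b \\ -\sin b & \cos b \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
\quad\text{with } \cos b \approx 1,\ \sin b \approx b
\;\Longrightarrow\;
x' \approx x + b\,y, \qquad y' \approx y - b\,x,
\qquad b = 2^{-4}.
```

The actual algorithm differs from this in one subtle way: it computes y' first and then uses y' (not y) in the x update, which is what makes the map area-preserving rather than slowly expanding.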



leni536 9 hours ago 0 replies      
The alternating updating of x and y is analogous to a leap-frog numerical solution of a harmonic oscillator (with y=dx/dt). If they are updated at the same time then it's explicit Euler.
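A tiny numerical check of that claim (floating point rather than the fixed-point original; `euler` and `alternating` are just my names for the two update orders):

```python
eps = 1.0 / 16.0

def euler(x, y, steps):
    # simultaneous update: both right-hand sides use the old values
    for _ in range(steps):
        x, y = x + eps * y, y - eps * x
    return x, y

def alternating(x, y, steps):
    # leap-frog / symplectic-Euler style: x sees the already-updated y
    for _ in range(steps):
        y -= eps * x
        x += eps * y
    return x, y

radius = lambda p: (p[0] * p[0] + p[1] * p[1]) ** 0.5
print(radius(euler(1.0, 0.0, 5000)))        # grows without bound
print(radius(alternating(1.0, 0.0, 5000)))  # stays near 1
```

The simultaneous update has determinant 1 + eps^2, so the radius grows by a factor sqrt(1 + eps^2) every step; the alternating update has determinant exactly 1 and stays on an invariant ellipse.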
tomerbd 4 hours ago 4 replies      
I see more and more people blogging on some-hacker-name.wordpress.com. Is it because you don't want to purchase a domain, or something else? What is the reason (mere curiosity)? Why not, for example, blogger.com? Why not Medium? I wonder if there is some specific reason I'm not aware of. Thanks.
IncRnd 8 hours ago 0 replies      
I remember seeing circle drawing code, written in Basic, on an Atari 400/800. I have remembered that code until this day, because it was simple and elegant.

Now that I have read this article, I can recognize that code was the Minsky Circle Algorithm.


Two museums having an informative fight on Twitter newstatesman.com
180 points by wglb  14 hours ago   30 comments top 13
wimagguc 1 hour ago 0 replies      
I'm writing this from one of these museums (the Natural History Museum), after having been in the Science Museum yesterday. They are both well worth a visit, and since they are next to each other and both donation-only, I definitely recommend visiting both. This Twitter thread, if anything, encourages the same thing: don't choose between the two, go for both.
firasd 4 hours ago 2 replies      
Okay just to help out my fellow long-suffering HN users I made a Medium account and posted these tweets there.


ninjaranter 11 hours ago 4 replies      
That's brilliant! I really wish Twitter had a better way of letting me follow the conversation between these two accounts without having to read through a few dozen tweets from other random people (and if there is one and I missed it, please let me know)
magic_beans 24 minutes ago 0 replies      
This was darling. I like to imagine the two Social Media Managers are dating.
laumars 5 hours ago 0 replies      
Both museums are really very good - for children and adults alike. Plus they're free to visit too.
zakki 4 hours ago 1 reply      
In Indonesia, we have a term for "fight on Twitter": twitwar or twitwor.
kerkeslager 1 hour ago 0 replies      
tdy721 12 hours ago 1 reply      
I don't know how to add to this thread... But it's brilliant! Will someone ping the Smithsonian? That seems like it would unite those monarchists real quick. And we all know how that would go ^.~
brailsafe 7 hours ago 0 replies      
NHM has decent footing in their open source contributions. I vote for them. Also dinosaurs n stuff.
dghf 5 hours ago 1 reply      
Was that a locust eating a mouse?
SSLy 7 hours ago 0 replies      
This site is doing something very weird with the scrolling.
chairmanwow 5 hours ago 1 reply      
The fact that I have to rely upon a news site with some of the worst IN YOUR FACE ad UI to embed each tweet in this conversation thread individually is pretty damning for Twitter. Why can't I organically discover / follow this on Twitter itself? It really had so much potential for being the heartbeat of the Internet, and maybe for an age it was. However, they never innovated or evolved leaving their biggest innovation to be "140 characters".
fit2rule 4 hours ago 0 replies      
Some enterprising Brit hacker needs to turn this squabble into a real video game, STAT!
Show HN: Colors A data-driven collection of beautiful color palettes klart.co
38 points by drikerf  8 hours ago   12 comments top 4
ComputerGuru 25 minutes ago 0 replies      
I don't know if your data model (or general hypothesis) is working. There are some truly garish color combinations (too many to be a fluke) on the site, so I'm thinking some colors work better as part of a work rather than as a stand-alone palette.
tw1010 3 hours ago 1 reply      
What do you mean by "data-driven"? What algorithm are you using to produce the palettes?
crodrigues 2 hours ago 1 reply      
Nice! Though it seems a lot like http://coolors.co, any differentiating aspects?
The Beige Box Fades to Black (2002) nytimes.com
4 points by tosh  2 hours ago   1 comment top
cylinder714 14 minutes ago 0 replies      
Was the NeXT cube the first black PC? Here's a good look, for those of you who have never seen one: http://www.johnmiranda.com/next.htm
Whats New in Python 3.7 python.org
47 points by happy-go-lucky  1 hour ago   19 comments top 3
Kpourdeilami 0 minutes ago 0 replies      
Unrelated to Python 3.7, but since 3.5 you can merge two dictionaries together by doing:


new_dict = {**dict1, **dict2}


It is so handy and nice
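For anyone trying it out: the double-star unpacking is the important part — `{**dict1, **dict2}` merges, while a plain `{dict1, dict2}` would try to build a set — and on key collisions the right-hand dict wins. A minimal check:

```python
defaults = {"host": "localhost", "port": 80}
overrides = {"port": 8080}

merged = {**defaults, **overrides}   # later unpacking wins on collisions
print(merged)                        # → {'host': 'localhost', 'port': 8080}
```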

dorfsmay 1 hour ago 5 replies      
"More than 255 arguments can now be passed to a function, and a function can now have more than 255 parameters."

There are human devs who needed this? Or is this a sign that AI bots are now involved in language design?

fulafel 1 hour ago 2 replies      
Switching the default encoding from ASCII to utf-8 sounds like a pretty big change.
Chrome to force .dev domains to HTTPS via preloaded HSTS ma.ttias.be
138 points by Mojah  7 hours ago   129 comments top 19
bpicolo 0 minutes ago 0 replies      
The main issue here is how much of a PITA it is to work with HTTPS locally. Self-signed certs and forced /etc/resolver/ configs are only half of it. Then you run into trouble with mobile emulators, proxying, etc. We have an automated version of it for devs, but it exists out of necessity rather than anything else. It's a pain to deal with.
noinsight 6 hours ago 6 replies      
.test is an official IANA reserved special-use domain name that will never be delegated out. Use it. Problem solved.

I don't know why people thought they could start using random TLD's on their own, there was always the risk they could be delegated officially.


CydeWeys 2 hours ago 4 replies      
Hey everyone. I'm the Tech Lead of Google Registry and I'm the one behind this (and likely future) additions to the HSTS preload list. I might be able to answer some questions people have.

But to pre-emptively answer the most likely question: We see HTTPS everywhere as being fundamental to improving security of the Web. Ideally all websites everywhere would use HTTPS, but there's decades of inertia of that not being the case. HSTS is one tool to help nudge things towards an HTTPS everywhere future, which has really only become possible in the last few years thanks to the likes of Let's Encrypt.

hannob 5 hours ago 6 replies      
Just commented over at Matthias' blog, I'll just copy-paste it here:

First of all, I think this is generally a good move. If people use random TLDs for testing then that's just bad practice and should be considered broken anyway.

But second, I think using local host names should be considered bad practice anyway, whether it's reserved names like .test or arbitrary ones like .dev. The reason is that you can't get valid certificates for such domains. This has caused countless instances where people disable certificate validation for test code and then ship that code in production. Alternatively you can have a local CA and ship its root on your test systems, but that's messy and complicated.

Instead, best practice IMHO is to have domains like bla.testing.example.com (where example.com is one of your normal domains) and get valid certificates for it. (In the case where you don't want to expose your local hostnames, you can also use bla-testing.example.com and get a wildcard cert for *.example.com. Although ultimately I'd say you just shouldn't consider hostnames to be a secret.)

hartror 5 hours ago 2 replies      
Use .localhost, pointed at your loopback interface, for local development. It is reserved for this purpose and obvious to everyone, unlike .test.

For reference your options are:

 .test
 .example
 .invalid
 .localhost

With only .localhost fitting the purpose of most people's usage of .dev.

captainmuon 5 hours ago 0 replies      
They should do the opposite. There should be a .insecure domain where browsers accept HTTP or HTTPS with wrong or no certs, and pretend it is HTTPS with all consequences (e.g. loading of HTTPS third party resources). I wouldn't put it on the open net, but rather let people set it up internally for testing.
hobarrera 26 minutes ago 0 replies      
This is perfect and great. I'd love to see gradually (yes, GRADUALLY, without breaking anything!) all TLDs do this.

".localhost" has existed and been popular for local development for MANY years. I've no idea why somebody would use .dev, but now that it's a registered TLD, using it locally is just asking for trouble.

Also, you can just use the 127.x.x.x loopback addresses directly, etc.

tscs37 7 hours ago 3 replies      
Just as a note: .dev is not yet an official TLD; its status is "proposed", which means that Google is basically at the highest priority on the waiting list.

.foo is delegated and thusly a full TLD, yes.

On the other hand, you should not be using .localhost if the target is not running on your loopback interface; resolving localhost to anything but loopback is considered harmful.

I find .test or .intranet to be more useful for such installations, they are either designated as "cannot be a TLD" or are very very unlikely to become a TLD, respectively.

andrewaylett 6 hours ago 2 replies      
For my development needs, I try to either:

* Publish mDNS records to give myself extra `.local` names, or
* Get a wildcard published in the organisation's internal DNS

If you can't do either of those, _please_ use `.test` as your test TLD, as it's explicitly set aside for that purpose so you know you're never going to collide with anyone.


ComputerGuru 1 hour ago 0 replies      
I've never used .dev as a TLD, but going back five or six years we set up a .dev subdomain of our domain and use that exclusively for development.

dev.ourdomain.net is a web-accessible server on our local network, configured as the DNS server for that subdomain, and is our internal CA, trusted to issue the certs we use for development.

Kipters 6 hours ago 2 replies      
Another option is not using Chrome as the main dev browser. Firefox replaces it just fine.
0x0 6 hours ago 2 replies      
With all the hacks that people have put in place for using .dev locally, who in their right mind would want to even register and use a .dev domain? :P
donatj 2 hours ago 0 replies      
We have always used local.{site}.com as a subdomain rather than a TLD. It makes CORS rules simpler, and we actually have a real DNS record pointing at the loopback address, so we don't have to bother with a HOSTS file.
apatheticonion 2 hours ago 0 replies      
*.localhost is a cool idea. It would be nice if browsers treated self-signed certificates as valid for it, or even did some magic and pretended it had an SSL certificate.
noway421 6 hours ago 1 reply      
With .test thrown around a lot, would there be any complementary support from browser vendors for that TLD to be specifically a development TLD? localhost is recognised as one by Chrome, for example: it's the only domain where the HTML5 geolocation API works without HTTPS, and where "your passwords are transferred via plain text" is not displayed. To help the shift to .test, Google might alter its heuristics to recognise .test as a common TLD used for development.
ramses0 36 minutes ago 0 replies      

Sorry for top-leveling a grand-child comment, but reading between the lines, this is the attack vector:

> And for the last question: Again, there are no .dev domain names. There never have been. It's never been available for registration. The recommendation for a long time has been to only use either (a) domain names that you actually own, or (b) domain names that are reserved for testing and are guaranteed never to exist a la RFC 2606. Using domain names for testing that don't yet exist but could in the future is a huge security hole that you must fix now. Do it now while the domain names still fail to resolve. Once they resolve, and you don't own them, then your security situation gets a lot worse.

Google is concerned with nation-state attacks. This means they have to assume ninja-assassin-scuba-divers have tapped all their cables underground. They're also concerned about ninja-assassin-usb-stick-droppers, and all kinds of other use cases. What they're doing is:

1) Requiring *.dev to match PRE-LOADED HSTS certs. This allows Google to "safely" boot up a computer from scratch. As long as "clone-a-computer-from-scratch.dev" matches the public/private handshake for HSTS/HTTPS, Google knows that no MITM, no nation-state DNS takeover, etc. is possible. So long as the VERY FIRST CONTACT WITH THE INTERNET is a *.dev domain, that computer can be "as secure as possibly known".

2) Forcing people to bounce "off" of invalid TLDs as a network administration method. Remember, Google is concerned about nation states. Remember WannaCry? How it was disabled by some random researcher registering xyz-abc-123.com? That attack costs $15. Now imagine a nation-state intentionally registering a gTLD of "*.haha-now-your-company-infra-is-pwnd" which they somehow glean is the gTLD your developers use for local development / testing / intranet portal.

If you could spoof IBM's intranet by doing something like "http://www.welcome.ibm" or "https://www.welcome.ibm" (because *.ibm wasn't cert-pinned), then you could trivially cause *.ibm to resolve to some sort of spoofed site to collect passwords. Or what if they're catching `mysql -uroot -pxyz staging.product.ibm`? Whoops. Or... perhaps another gTLD we'll see Google register is "*.go", or maybe their internal builds of Chrome already do cert-pinning on that. (I've seen/heard they allow 'http://go/my-internal-shortlink'; I know that other tech companies have had similar setups.) Same attack vector. You control the DNS, you control ALL responses. And when somebody types www.microsoft.com, it may be _impossible_ to know if that "Down for Maintenance" banner is real or fake if their DNS is controlled by somebody who really is your enemy.

onion2k 6 hours ago 0 replies      
I don't really see this as a problem. In fact, I wish Chrome would do that for every gTLD, but obviously that's not going to happen any time soon. Secure by default would be great.

The real issue (for me at least) is that it's far too much of a pain to run an SSL secured site locally. It can be done, but doesn't work well across teams given you need to register your certificates locally. Being able to serve a site from a Vagrant box or a Docker container over https in a way that a browser will accept (or even just pretend to accept) would be immensely helpful. I'm sure web developers and browser vendors are trying to resolve the problem already, but it can't come soon enough in my opinion.

kuschku 3 hours ago 1 reply      
The most annoying part here is that Google isn't even using .dev as a public TLD. They purely use it for internal testing, and all registered .dev domains resolve to 127.x.x.x addresses.

.dev should have been entirely reserved, or made available publicly. Registering a TLD just for your own internal testing, and forcing everyone else to switch away, is the most user-unfriendly move you can make.

frik 6 hours ago 5 replies      
I test with HTTP, locally.

This forcing of opinionated things gets on my nerves. How about developing the browser and letting the masses decide what they use? Amazon was 100% HTTP for 20 years (except the single login page), and it worked very well.

       cached 17 September 2017 16:02:01 GMT