hacker news with inline top comments    11 Aug 2016
I Just Drove Eight Hours on Tesla Autopilot and Lived to Tell the Tale bloomberg.com
22 points by nreece  41 minutes ago   15 comments top 5
steven777400 27 minutes ago 4 replies      
Put aside the silly title and the inconsistency between the subtitle and content ("the only real problem was the driver behind the wheel" vs "The sensors failed to register a rapidly accelerating gold-colored SUV beside me and would have driven directly into its path had I not quickly taken over") and this article has a really good point to make:

As we get closer to, but not all the way to, fully autonomous vehicles, figuring out how to communicate to users (at what point do we stop calling them drivers?) the necessary level of engagement (and notifying them when that level changes) is going to be increasingly critical.

flashman 0 minutes ago 0 replies      
It looks bad that the Tesla tried to merge in front of an SUV. But on the flip side, if the SUV had also had Autopilot, maybe it would have been a less dangerous situation. And when more cars have autopilot systems, maybe they'll be better able to communicate their proximity to one another.
cesarb 4 minutes ago 0 replies      
I like this video, which shows the Tesla autopilot in action with the driver-side display (which is usually hidden by the wheel) visible: https://www.youtube.com/watch?v=ZetTg73Ebyo

Looking at it, it seems that the autopilot mostly adds another layer of indirection. Instead of driving directly, the driver gives the autopilot orders like "maximum speed 60" or "move one lane left/right". The driver still has to pay attention to what's happening around to give it the correct orders. In the example given in the article, the driver still has to check whether the lane is free before giving the car an order to move into that lane.

jgamman 24 minutes ago 1 reply      
when you call it Autopilot and not SuperiorCruiseControl you've set expectations. It's no use trying to re-define the term more accurately after the fact. They'll survive but the halo got dented.
DannyBee 21 minutes ago 1 reply      
Am I wrong in thinking that the problems complained about here are precisely because it uses sensors and a camera and not lidar?

(I did a bit of research, and reading around says "probably", but i trust HN's opinion a bit more :P)

This JPEG is also a webpage coredump.cx
543 points by cocoflunchy  9 hours ago   180 comments top 40
daeken 8 hours ago 3 replies      
I abused this concept to compress demo code in PNG files, with great success. http://demoseen.com/blog/2011-08-31_Superpacking_JS_Demos.ht...

This is, at present, the most efficient way to pack demos on the web; a few characters of uncompressed bootstrap code, then the rest is deflated.

loudmax 8 hours ago 2 replies      
You can see this in action for yourself on a unix cli:

 $ curl -o squirrel.html http://lcamtuf.coredump.cx/squirrel/
 $ file squirrel.html
 squirrel.html: JPEG image data, JFIF standard 1.01, comment: "<html><body><style>body { visibility: hidden; } .n { visibilit"
Open the file in a browser and read the page. Then:

 $ mv squirrel.html squirrel.jpg
Open the renamed file in a browser and only the image appears.

I'm not sure what the security implications are. I'm not creative or devious enough to think of anything offhand, but a lot of attack vectors start off with this sort of misdirection.
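The `file` output above hints at how the trick works: the HTML lives inside the JPEG's comment segment. Here's a minimal Python sketch of that idea (the function name is mine, and placing the comment right after the SOI marker is one choice of several; the actual squirrel page may construct its file differently):

```python
# Sketch: stuff an HTML payload into a JPEG COM (comment) segment.
# Image decoders skip comments; a browser told the file is HTML will
# sniff past the binary and render the markup.

def embed_html_in_jpeg(jpeg_bytes: bytes, html: bytes) -> bytes:
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    # COM segment: marker 0xFFFE, then a 2-byte big-endian length that
    # counts the two length bytes themselves plus the payload.
    seg = b"\xff\xfe" + (len(html) + 2).to_bytes(2, "big") + html
    # Insert the comment right after the SOI marker.
    return jpeg_bytes[:2] + seg + jpeg_bytes[2:]
```

Open the result with a .html extension and the browser renders the payload; with a .jpg extension you just get the image.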

slice-beans 7 hours ago 3 replies      
Interestingly, this page is intercepted by my router which then just sends me a redirect to one of its settings pages. Odd.
kbenson 8 hours ago 1 reply      
Prior discussion, years ago, many comments: https://news.ycombinator.com/item?id=4209052
hardmath123 7 hours ago 1 reply      
Some PoC||GTFO PDFs are also valid in other formats ("polyglots"). They usually do PDF+HTML+ZIP, though sometimes they get (even more) creative.


imurray 8 hours ago 1 reply      
Right-clicking on the image and selecting "View Image" (Firefox), or "Open image in a new tab" (Chromium), gives the webpage, not the image. I can see why that happens: the menu items just open a URL and don't force it to be an image. However, it was a bit disorienting.
danbruc 8 hours ago 5 replies      
A testament to one of the worst decisions in computing history - not to fail displaying a web page with an error message in case it is not a valid HTML document.
nashashmi 8 hours ago 0 replies      
I didn't know what he was talking about until I tried this:

 data:text/html, <html><img src="http://lcamtuf.coredump.cx/squirrel/"></html>
Put that in the url of the browser.

amavisca 8 hours ago 2 replies      
This site uses the xmp tag (deprecated in HTML 3.2, removed in HTML5) which I found interesting and had never seen!


It's similar to the pre tag but doesn't require the escaping. I guess you just have to make sure you don't have a closing xmp tag :)

haddr 30 minutes ago 1 reply      
Can I upload such an image, for instance to Facebook, with the intent of running it as HTML (with some JS inside)?
ComodoHacker 6 hours ago 2 replies      
>No server-side hacks involved

I doubt this. For a request with Accept: "image/png,image/*;q=0.8,*/*;q=0.5", the server shouldn't respond with Content-Type: "text/html"

tomw1808 8 hours ago 7 replies      
> Pretty radical, eh? Send money to: lcamtuf@coredump.cx

How do you send money to an email address? Not that I was going to send any, but I wondered how you'd actually receive it.

tarball 6 hours ago 0 replies      
Here is a similar experimental project I made, between image and web page : http://raphaelbastide.com/guropoli/
aioprisan 8 hours ago 2 replies      
So in theory, can analytics platforms be compromised so that JPEG tracking pixels could turn into full-fledged sites interfering with the parent page at, say, a bank website? Firing off credentials in the background?
blahpro 8 hours ago 0 replies      
I like this bit: <img src="#" [...]>
tgarma1234 3 hours ago 0 replies      
Well this will certainly appeal to Steganography enthusiasts and perverts who have clumsily been loafing around .onion sites for years and who now finally have a way to share content in the clear. And of course the NSA, FBI and CIA are suddenly stuck trying to figure out why this goofy squirrel is so popular in Yemen.

 mv squirrel.html squirrel.jpg
 sudo apt-get install steghide
 steghide embed -cf squirrel.jpg -ef secret.txt
 mv squirrel.jpg squirrel.html

And voila...

pix64 2 hours ago 0 replies      
Neat idea. Here is a BMP I just threw together that is also a webpage.


koytch 2 hours ago 0 replies      
Along similar lines: https://www.alchemistowl.org/pocorgtfo/

Some of the PDFs also happen to be valid images, audio files, zip archives, etc.

d33 8 hours ago 2 replies      
...how is that possible?
fsiefken 8 hours ago 1 reply      
great hack, could you get javascript working inside a jpeg as well? Or obfuscate the javascript and decrypt in the browser for steganographic purposes?
tannerc 6 hours ago 0 replies      
Curious, how do search engine crawlers interpret this? Would it be the same as a browser, i.e. the bot would treat the requested img url respectfully?
throwawayReply 8 hours ago 2 replies      
I appreciate the technical trickery in this version, but has it not been possible to do this since at least 1996[1] by having the server serve different files based on the "Accept" http header?

[1] https://www.w3.org/Protocols/HTTP/1.0/spec.html#Accept
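A quick sketch of that 1996-era approach: the server inspects the Accept header's q-values and picks a representation. This is a simplified, hypothetical helper (real negotiation per the HTTP spec also handles specificity precedence between `type/*` and exact types):

```python
def pick_content_type(accept: str) -> str:
    """Return the offered type with the highest q-value in an Accept header."""
    offered = ["text/html", "image/jpeg"]  # what this server can serve
    prefs = {}
    for part in accept.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0  # q defaults to 1 when absent
        for f in fields[1:]:
            if f.strip().startswith("q="):
                q = float(f.strip()[2:])
        prefs[mtype] = q

    def score(t):
        wildcard = t.split("/")[0] + "/*"
        return prefs.get(t, prefs.get(wildcard, prefs.get("*/*", 0.0)))

    return max(offered, key=score)
```

With the header from ComodoHacker's comment above, `image/jpeg` wins (0.8 via `image/*`) over `text/html` (0.5 via `*/*`), which is exactly why the no-server-side-hacks version is the more surprising trick.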

grimmdude 5 hours ago 1 reply      
This is cool. Though the 133kb download size for the html isn't great.
metrognome 7 hours ago 1 reply      
You could do the same thing with a .wav file, embedding the HTML after the data sub-chunk. Adobe Audition uses this method to embed application-specific metadata for the file (marker and sub-marker locations, for example).
rcthompson 6 hours ago 0 replies      
I wonder if browsers are smart enough to only download the file once for the html and then cache it for the embedded image.
webXL 6 hours ago 0 replies      
DO NOT run Chrome's Timeline dev tool on this when reloading. Crashed a couple tabs.

Good edge case for browser tests!

soheil 3 hours ago 0 replies      
Can you make it play a video too?
EGreg 7 hours ago 2 replies      
Can someone explain in simple English how this works?
pritianka 5 hours ago 0 replies      
Reminds me of Pied Piper for some reason :-)
chrischen 7 hours ago 1 reply      
This would be a unique way to make downloading images harder.
caub 3 hours ago 0 replies      
anchors are not interpreted as html
pschastain 4 hours ago 0 replies      
No explanation of what's going on?
ausjke 8 hours ago 2 replies      
You can use data URIs to embed images too. Not sure how this is done though, so why not just use a data URI?
hughbetcha 2 hours ago 0 replies      
This jpg is not a pipe.
kessiler 8 hours ago 0 replies      
nice trick!
gkya 8 hours ago 1 reply      
C'mon, don't be so stingy, give him a Ben at least :)

[ASCII art of a US $100 bill ("Federal Reserve Note", Franklin portrait), flattened to a single line in this copy]
source: http://chris.com/ascii/index.php?art=objects/money

a1k0n 8 hours ago 7 replies      
That's a chipmunk, not a squirrel.
Shape of errors to come rust-lang.org
266 points by runesoerensen  7 hours ago   102 comments top 13
cfallin 7 hours ago 7 replies      
I really appreciate the user-friendliness of Rust's error messages -- I can't remember seeing a compiler tell me "maybe try this instead?" before (perhaps something from Clang, but never with the specificity of, e.g., a suggested lifetime annotation). And from a parsing / compiler-hacking perspective, it seems really hard to get the heuristics good enough to produce the "right" root cause. Kudos to the Rust team for this continued focus!
justinsaccount 3 hours ago 1 reply      
In my limited Rust experience, the suggestions in the error messages, even in the longer explanations, were useless. Mostly it came down to me trying to do something that was simply not supported, but the compiler not knowing that and leading me on a wild goose chase.

From what I remember, I was trying to use the iterator trait, but return records that contained str pointers.. The goal being to parse a file and yield pointers to the strings from each line to avoid allocating memory and copying bytes around. Rust tries to tell you that you need lifetime specifiers, but if you try adding those nothing compiles anyway because the iterator stuff doesn't actually support that.

I eventually got it to work by returning copies of the strings.. maybe the unsafe stuff would have done what I wanted, that's what rust-csv seems to do at least.

I concluded that Rust is definitely not a language you can learn by chasing down error messages and incrementally fixing things. If you don't 100% understand the borrow and lifetime stuff, fixing one error is just going to cause two new ones.

dllthomas 5 hours ago 1 reply      
Please put file locations (as many as might be relevant) at the start of a line in the standard format!

The other changes look valuable. Improving error reporting is great.

Edited to add emphasis: I really did mean a line, not every line.

gnuvince 6 hours ago 0 replies      
Very good! I always liked the content of Rust's error messages, since they clearly explained the issue, but their form was a bit problematic: they were very noisy, and it wasn't easy to spot the issue at a glance. You had to scroll up, find the beginning of the error, and read carefully.
waynenilsen 5 hours ago 3 replies      
I would love to see an option for showing the error order forward or backward. My workflow is to start fixing compile time errors from the top of the `cargo` output but scrolling to the top can be fairly annoying when there are a lot of errors. Having the most relevant error at the bottom of the command line (where it automatically scrolls) would be useful as an option IMO. This probably causes some other unseen problems however
ColinDabritz 4 hours ago 0 replies      
I love the clarity and readability of these errors. You can work on UX at lower levels, and it looks like this. Beautiful. I'm not even a Rust dev, I'm mostly in C# land these days, but I appreciate the effort this takes. Well done!
Animats 4 hours ago 2 replies      
Imagine what you could do if error messages didn't have to appear as plain text. You could have arrows, lines, shaded areas, and even multiple fonts.
mrich 3 hours ago 1 reply      
Great to see an improvement over the already improved error reporting established by clang and later adopted by gcc.

However I don't understand why backticks are still being used - they tend to look ugly especially when pasting them into emails etc.

marsrover 4 hours ago 3 replies      
Not related to this article, but I was looking through the Rust survey mentioned at the bottom of the article and was surprised at the amount of people using it for web development.

I'm not very knowledgeable about Rust but I guess I assumed it would not be the best technology to use in that space. Is Rust really that prevalent in web development?

zalmoxes 7 hours ago 3 replies      
Inspired by Elm?
Symmetry 4 hours ago 0 replies      
That looks really cool and I'll have to give learning Rust another try when this lands. Also, the title was pretty wonderful. I spent a bit thinking about it before looking at the domain and realizing what it had to be.
hyperpape 5 hours ago 1 reply      
Do I understand correctly that since this is in the current nightlies, it's slated for 1.12? So, sometime in October?
knodi 5 hours ago 0 replies      
damn... thats nice. Can't wait to dive into it.
CPUs are optimized for video games moderncrypto.org
229 points by zx2c4  5 hours ago   202 comments top 17
sapphireblue 4 hours ago 17 replies      
This may be an unpopular opinion, but I find it completely fine and reasonable that CPUs are optimized for games and weakly optimized for crypto, because games are what people want.

Sometimes I can't help but wonder how the world where there is no need to spend endless billions on "cybersecurity", "infosec" would look like. Perhaps these billions would be used to create more value for the people. I find it insane that so much money and manpower is spent on scrambling the data to "secure" it from vandal-ish script kiddies (sometimes hired by governments), there is definitely something unhealthy about it.

pcwalton 4 hours ago 4 replies      
Games are also representative of the apps that actually squeeze the performance out of CPUs. When you look at most desktop apps and Web servers, you see enormous wastes of CPU cycles. This is because development velocity, ease of development, and language ecosystems (Ruby on Rails, node.js, PHP, etc.) take priority over using the hardware efficiently in those domains. I don't think this is necessarily a huge problem; however, it does mean that CPU vendors are disincentivized to optimize for e.g. your startup's Ruby on Rails app, since the problem (if there is one) is that Ruby isn't using the functionality that already exists, not that the hardware doesn't have the right functionality available.
speeder 2 hours ago 5 replies      
As a gamedev I found that... weird.

A CPU for games would have very fast cores, larger cache, faster (less latency) branch prediction, fast apu and double floating point.

Few games care about multicore, many "rules" are completely serial, and more cores doesn't help.

Also, gigantic simd is nice, but most games never use it, unless it is ancient, because compatibility with old machines is important to have wide market.

And again, many CPU-demanding games are running serial algorithms with serial data; matrix math is usually only essential to stuff that the GPU is doing anyway.

To me, CPUs are instead optimized for Intel's biggest clients (server and office machines).

Narann 4 hours ago 3 replies      
The real quote would have been:

> Do CPU designers spend area on niche operations such as _binary-field_ multiplication? Sometimes, yes, but not much area. Given how CPUs are actually used, CPU designers see vastly more benefit to spending area on, e.g., vectorized floating-point multipliers.

So, CPUs are not "optimized for video games", they are optimized for "vectorized floating-point multipliers". Something video game (and many others) benefits from.

joseraul 3 hours ago 0 replies      
TL;DR: To please the gaming market, CPUs grow large SIMD units. ChaCha uses SIMD, so it gets faster. AES needs array lookups (for its S-box) and gets stuck.
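To see why ChaCha vectorizes so well, here is its quarter-round sketched in Python: it is nothing but 32-bit add/rotate/xor, with no data-dependent table lookups of the kind AES's S-box needs (this follows the quarter-round as specified in RFC 8439):

```python
# ChaCha quarter-round: pure add-rotate-xor on 32-bit words.
# Every operation maps directly onto SIMD lanes; there is no memory
# access whose address depends on secret data.
MASK = 0xFFFFFFFF

def rotl32(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d
```

The absence of lookups is also what makes ChaCha naturally constant-time in software, while table-based AES implementations need hardware help (AES-NI) to get there.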
wmf 4 hours ago 0 replies      
Maybe a better headline would be something like "How software crypto can be as fast as hardware crypto". I was curious about this after the WireGuard announcement so thanks to DJB for the explanation.
nitwit005 3 hours ago 1 reply      
Not really. Just look through the feature lists of some newer processors:

AES encryption support: https://en.wikipedia.org/wiki/AES_instruction_set

Hardware video encoding/decoding support (I presume for phones): https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video

It's more that it's relatively easy to make some instruction useful to a variety of video game problems, but difficult to do the same for encryption or compression. You tend to end up with hardware support for specific standards.

magila 3 hours ago 1 reply      
One important aspect DJB ignores is power efficiency. ChaCha achieves its high speed by using the CPU's vector units, which consume huge amounts of power when running at peak load. Dedicated AES-GCM hardware can achieve the same performance at a fraction of the power consumption, which is an important consideration for both mobile and datacenter applications.

Gamers generally don't care about power consumption. When you've spent $1000 on the hardware an extra dollar or two on your electricity bill is no big deal.

wangchow 42 minutes ago 1 reply      
The form factor of laptop screens is built for media consumption, even though a squarer form factor is superior for productivity (I found an old Sony Vaio and its screen form factor felt very pleasant). It seems the general consumption of media has dominated CPU design, in addition to everything else in our computers.
joaomacp 4 hours ago 1 reply      
Of course. Gamers are the biggest consumers of new, top of the line PC hardware.
rdtsc 3 hours ago 1 reply      
Wonder how a POWER8 CPU would handle it or if it is optimized differently. It obviously is not geared for the gaming market.
tgarma1234 1 hour ago 1 reply      
I really find it hard to believe that people with such an interest in security at the CPU level would buy "retail" processors like you and me have access to. I am no expert in the field, but it just seems weird that there isn't a market for, and a producer of, specialized processors that are more militarized or something. Why does everyone have access to the same Intel chips? I doubt that's actually the case. Am I wrong?
xenadu02 3 hours ago 0 replies      
tl;dr: AES uses branches and is not optimized for vectorization. Other (newer) algorithms are designed with branchless vectorization in mind, which makes specialized hardware instructions unnecessary.
Philipp__ 3 hours ago 2 replies      
ARMA III could be the good example of CPU bottleneck. Or maybe it is badly optimized... Then we hit the hot topic of multicore vs singlecore performance.
wscott 1 hour ago 0 replies      
No, Intel CPUs are optimized to simulate CPUs

Some stories from back around 2000 when designing CPUs at Intel. Some people did bemoan the fact that little software actually needed the performance of the processors we were building. One of the benchmarks where the performance was actually needed was ripping DVDs. That led to the unofficial saying "The future of CPU performance is in copyright infringement." (Not seriously, mind you)

However, here is a case where the CPUs were actually modified to improve one certain program.

From: https://www.cs.rice.edu/~vardi/comp607/bentley.pdf (section 2.3)

"We ran these simulation models on either interactive workstations or compute servers; initially, these were legacy IBM RS6Ks running AIX, but over the course of the project we transitioned to using mostly Pentium III based systems running Linux. The full-chip model ran at speeds ranging from 0.5-0.6 Hz on the oldest RS6K machines to 3-5 Hz on the Pentium III based systems (we have recently started to deploy Pentium 4 based systems into our computing pool and are seeing full-chip SRTL model simulation speeds of around 15 Hz on these machines)"

You can see that the P6-based processors (PIII) were a lot faster than the RS6Ks, and the Wmt version (P4) was faster still? That program is csim, and it does a really dumb translation of the SRTL model of the chip (think verilog) to C code that then gets compiled with GCC (the Intel compiler choked). That code was huge and it had loops with 2M basic blocks. It totally didn't fit in any processor's instruction cache. Most processors assume they are running from the instruction cache and stall when reading from memory. Since running csim was one of the testcases we used when evaluating performance, the frontend was designed to execute directly from memory. The frontend would pipeline cacheline fetches from memory, which the decoders would unpack in parallel. It could execute at the memory read bandwidth. This was improved more on Wmt. This behavior probably helps some other real programs now, but at the time this was the only case we saw where it really mattered.

The end of the section is unrelated but fun:

"By tapeout we were averaging 5-6 billion cycles per week and had accumulated over 200 billion (to be precise, 2.384 * 10^11) SRTL simulation cycles of all types. This may sound like a lot, but to put it into perspective, it is roughly equivalent to 2 minutes on a single 1 GHz CPU!"

Games were important, but at the time most of the performance came from the graphics card. In recent years Intel has improved the on-chip graphics and offloaded some of the 3d work to the processor using these vector extensions. That is to reclaim the money going to the graphics card companies.

revelation 4 hours ago 4 replies      
I thought modern video games are predominantly limited by GPU performance? Maybe the argument is that while usually CPU performance isn't the most important part of the equation, video gamers base their purchasing decision on misguided benchmarks that expose it.

The big CPU hog and prime candidates for these vector operations nowadays seems to be video encoding.

DINKDINK 1 hour ago 0 replies      
Off-topic: That's a great favicon
Show HN: Generating fantasy maps an interactive exploration mewo2.com
808 points by mewo2  13 hours ago   49 comments top 30
aarondf 10 hours ago 0 replies      
I have nothing super valuable to add except to say: this is totally awesome. Good for you for exploring this and sharing it with the world. I just emailed it to 3 non-techy friends who will totally love it.

Keep making, keep sharing!

chippy 9 hours ago 1 reply      
If you like making fiction maps by hand (but not fantasy maps) - have a look at this great mapping project: Open Geo Fiction: http://www.opengeofiction.net

From the about page: "This world is set in modern times, so it doesn't have orcs or elves, but rather power plants, motorways and housing projects. But also picturesque old towns, beautiful national parks and lonely beaches. "

It's essentially a fictional OpenStreetMap, and actually uses all the same stack as OSM, with all the data as Creative Commons Attribution-NonCommercial-ShareAlike

algorias 3 hours ago 1 reply      
The tricky task of label placement could be outsourced to a SAT solver.

The way it works is that for every city, town etc you generate a few placement candidates (4 positions around the point like you do seems fine) and then calculate all the pairs of placements that collide. For each collision you add a clause to a SAT formula that forbids this combination from occurring. Every solution of this formula will be a clean labeling of your map.
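The scheme above can be sketched without a real SAT solver: a brute-force search over candidate placements with pairwise-conflict constraints behaves the same way on small instances (names and the rectangle collision predicate are illustrative; a real implementation would hand the forbidden pairs to MiniSat or similar):

```python
# Label placement as constraint satisfaction: pick one candidate
# placement per city such that no chosen pair collides. Brute force
# stands in for the SAT solver on this toy scale.
from itertools import product

def overlaps(p, q):
    """Axis-aligned rectangles (x0, y0, x1, y1): True if they intersect."""
    return p[0] < q[2] and q[0] < p[2] and p[1] < q[3] and q[1] < p[3]

def place_labels(candidates, collides):
    """candidates: one list of placements per city.
    Returns a non-colliding assignment (one placement per city) or None."""
    for choice in product(*candidates):
        if all(not collides(choice[i], choice[j])
               for i in range(len(choice))
               for j in range(i + 1, len(choice))):
            return list(choice)
    return None
```

In the SAT formulation, each (city, placement) pair becomes a boolean variable, each city gets an "exactly one of my candidates" constraint, and each collision becomes a clause `NOT x OR NOT y`.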

caio1982 11 hours ago 0 replies      
Your http://mewo2.com/notes/naming-language is equally interesting, great work!
loeber 10 hours ago 3 replies      
Very cool! This reminds me of Amit P's polygonal map generation project, which you should look at if you're interested in this kind of stuff. http://www-cs-students.stanford.edu/~amitp/game-programming/...
tdeang 10 hours ago 0 replies      
If I had this as a kid I would have buried myself in the basement playing D&D for the rest of my life.
abelhabel 9 hours ago 0 replies      
This is such a great tool for hobby world designers. I know for sure that I will use this when I create adventures.

I made multiple tools for random world generation but never come close to this kind of quality. I'm impressed!

bovermyer 21 minutes ago 0 replies      
mewo2, I am immensely grateful to you for devoting the time and energy to a task I've been meaning to undertake (and thus value), but have yet to find the time.

Your success is inspiring, and I've forked your repo(s) to try and continue your work. Thank you so much!

pmontra 12 hours ago 1 reply      
Well done, thanks!

BTW, this is a quick way to generate a higher resolution map on the site. Open the developer tools, remove the width from .note (it's the container of the column), inspect the canvas at the bottom, and set its height and width to suit your needs. Then click the Generate button.

Maybe the page could be changed to extract that canvas from the column layout and make it fit the viewport.

quux 8 hours ago 1 reply      
Reminds me of a bit in one of Neal Stephenson's books where a MMORPG company hired a team of geologists to generate a geologically plausible map for their game. The hardest part of their job was finding ways to integrate the parts of the world that had been made up without any regard to geology and made no sense in their model.
agentultra 11 hours ago 0 replies      
This belongs to the class of teleological algorithms and is very cool! I appreciate the links to some of the source material the author learned from... and the interactive elements on the page are great. I'd like to do the same for my blog.

Nice work!

Animats 3 hours ago 0 replies      
Cute. Usually this is done with fractals, as with VistaPro and its successors. You generate a coarse random height field, then subdivide, making smaller random changes locally, until you have all the detail you want.

An ambitious project: take in fantasy novels and extract location cues from them, then draw a map. Find text which mentions a place, then try to recognize phrases which express distance and direction.
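The coarse-then-subdivide approach Animats describes is classic midpoint displacement. A 1-D Python sketch (parameter names are my own; 2-D versions like diamond-square work the same way, per axis):

```python
# Midpoint displacement: start with a coarse random height field, then
# repeatedly subdivide, perturbing each new midpoint by a random offset
# that shrinks at every level.
import random

def midpoint_displacement(levels, roughness=0.5, seed=0):
    rng = random.Random(seed)
    heights = [rng.random(), rng.random()]  # coarse initial field
    scale = 1.0
    for _ in range(levels):
        scale *= roughness  # smaller changes at finer detail levels
        refined = []
        for a, b in zip(heights, heights[1:]):
            refined.append(a)
            refined.append((a + b) / 2 + rng.uniform(-scale, scale))
        refined.append(heights[-1])
        heights = refined
    return heights
```

Each level doubles the resolution, so n levels turn 2 points into 2^n + 1; the `roughness` factor controls how jagged the terrain ends up.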

vblord 11 hours ago 1 reply      
This is really cool. I'm going to bookmark this and look in to it more in-depth. I've always wondered about how to generate maps for a game. Good job.
kylepdm 6 hours ago 0 replies      
That is super sweet. It looks sort of similar to how Dwarf Fortress generates its geographies.

You should look at how that game does it because it also involves creating a whole mythology and history to help generate civilizations and their fall/rise.

percept 11 hours ago 3 replies      
If the novels were as bad as he says they were, maybe he can crank out some random(ly bad) prose to go with it, and get Amazon-rich.

(And/or, this might make an interesting companion project.)

stephenmm 8 hours ago 0 replies      
Would be neat to give it real topo maps and then have it generate the towns and see how well it matches reality. Great work!
yousry 11 hours ago 1 reply      
Your post is really helpful. I recently tried to create procedural algorithms for medieval maps. I started with path-generation and circular city layouts.

Here is my try on paths generation:http://imgur.com/CUs6P4S

mhd 8 hours ago 0 replies      
Now I want to make a vector rogue-like with Tolkien-ish world maps...
jscardella 6 hours ago 0 replies      
This is an amazing use of Python. I'm OK with .py but terrible with images so this random generation amazes me. I will be passing this into my DM!
LoSboccacc 11 hours ago 0 replies      
very cool. tried to see if it could make sense of islands and such, and the results are extremely convincing: https://i.imgur.com/LXNtZLH.png
devniel 3 hours ago 0 replies      
That's really cool, now just let the community create stories around these maps. Great work!
bondpbond 1 hour ago 0 replies      
Or I could just visit a random planet in No Mans Sky and use that as my model.
fapjacks 5 hours ago 0 replies      
I found this entire post to be completely awesome, but laughed in particular with this line: "I have a programmer's superstitions about always using powers of 2, which are more pleasing to the spirit of the machine." Also, I share similar fond memories of those maps from cheap, grocery store fantasy books!
bbctol 10 hours ago 0 replies      
This is truly fantastic, both the project and the interaction. If you want to continue with it, it seems there's so much more you could do, too: roads, forests, altering namelists for different regions...
sybhn 8 hours ago 0 replies      
I too, like the author, often was more interested in the topography and maps of fantasy stories. This is awesome!
yawz 9 hours ago 0 replies      
This is amazing!This motivates me to do so many things... alas, none related to my day job.
TDL 10 hours ago 0 replies      
This is very neat, I hope I will have time to dig into this at some later date. Great job!
weka 8 hours ago 0 replies      
Well done. I need to explore this later but well-done
moultano 9 hours ago 0 replies      
Everything looks great except the rendering of the mountains.
kaik 8 hours ago 0 replies      
This is sooo cool! Thanks for sharing!!
Intero for Emacs: complete interactive development program for Haskell commercialhaskell.github.io
78 points by primodemus  5 hours ago   15 comments top 6
cm3 3 hours ago 0 replies      
There should be a warning that on the first start of intero-mode, this will initiate a rather long stack operation, which I suppose is extra long if you haven't used stack before. Also, it seems to insist on a stack'ised project, but I suppose one could say that should be obvious. I use cabal because stack misses some features and is a little weird, and use stack when I have to. And even though I have intero installed and in $PATH, when opening an actual project with a stack file, it starts building intero. Weird automatism.
jesserosenthal 3 hours ago 0 replies      
This looks great! My only complaint is that sometimes I do want to work on a quick script outside of a stack environment, and this seems like an all-or-nothing proposition. (I use haskell for pandoc filters quite a bit, for example, and setting up stack for that seems like overkill.) I'm sure a bit of elisp could make it fall back to haskell mode if it can't find a local .stack dir.
mark_l_watson 3 hours ago 0 replies      
I saw this notice on Reddit early yesterday morning and ended up using Intero for most of the day. Love it! There are a lot of improvements over and above the Emacs Haskell mode. I really like how fast it is to control-c-l and be put in the correct module and be running in seconds. I also like the realtime syntax error hints, type information, etc.
pash 2 hours ago 3 replies      
I've been waiting for this release before finally trying to switch from Vim (MacVim) to Spacemacs. Anybody have any tips for somebody who's never really used Emacs?
avindroth 1 hour ago 0 replies      
Haskell changed my life. Then Intero did it again.
seagreen 2 hours ago 3 replies      
Looks like I've got a choice to make (all on NixOS):

+ Stick with NeoVim and hope one of the Intero ports to that editor takes off (as I understand it there are a couple right now)

+ Switch to Emacs+Evil+Intero. I've never used Emacs before (even with Evil) so it would be a little work.

If someone more up-to-date on the situation has advice that would be awesome:)

The Great Productivity Puzzle newyorker.com
49 points by jseliger  4 hours ago   39 comments top 12
Animats 1 hour ago 3 replies      
Something that's puzzled me for a while: why haven't we hit "peak office" yet? Why are there still so many people working in offices? By now, we should have many abandoned office blocks, much as we have abandoned factories in the Rust Belt. But that's not happening. We have all this office automation, but too many people in offices.

Are marketing cost and G&A (general and administrative) increasing as a fraction of cost of goods/services sold? Perhaps it's the overhead of capitalism. Advertising just moves demand around; where the savings rate is low, it can't create it. There are a surprising number of products, from movies to telephone service, where the marketing cost exceeds the cost of providing the product or service.

One problem with a winner-take-all economy is that the cost of competition goes up, because being slightly better at the margin becomes more expensive. It's like sports; the additional input required for 1% more performance is a lot more than 1%, once you're close to the record.

throw_away_777 54 minutes ago 0 replies      
Productivity increases are going to the top earners, as wages stagnate for most of the population. http://inequality.org/income-inequality/

This has many reasons behind it, including political factors such as the decline of unions and increase of free trade.

thedevil 2 hours ago 1 reply      
A few rants (mixed in with a few half-baked ideas):

1) Economic progress from the industrial revolution is about quantity. We learned to make a lot more stuff. The industrial revolution is not quite done, but the impact is less and less every decade. We may never match that quantity growth of stuff because we don't even want more stuff, we only want better stuff. And we're getting better stuff and more variety.

2) Info. tech. IS being mismeasured and it's huge. We're not all imagining it. The economics of info tech are REALLY good except in the official GDP numbers. And arguing that it's not counted in economic measurements doesn't debunk the mismeasurement claim, it supports/explains it. What's more, info tech decimates a lot of industries - music, GPS, newspapers, travel agents, etc. And self-driving cars are probably going to decimate the auto industry and wipe out a lot of logistics related jobs. Everyone(A) will be better off but official GDP will fall.

3) Wages aren't growing largely because the economy needs more skilled and more managerial workers, but we don't seem to be producing a lot of them(B). It's an awesome time to be a skilled worker. Good and growing pay and billboard signs desperately seeking talent. There's no end to work that needs to be done with the right skills. Meanwhile, unskilled workers aren't so useful anymore. They were well off when anybody could sit in an assembly line and produce a lot of stuff. But now unskilled workers don't really produce, they mostly trade their time for other people's time (e.g. by cooking their food) and they do so from a weak negotiating position. I don't see any of this changing anytime soon.

(A) Everyone = the vast majority of people

(B) Anecdotal, I could be wrong

nostromo 2 hours ago 2 replies      
I hear classic economic thinking is that productivity gains lead to higher wages. But could the causation be reversed? Cheap labor tends to disincentivize innovations that can lead to productivity gains.

Perhaps we're entering a period where there are simply too many people for too few jobs. This keeps unemployment high, wages flat, and removes the incentive for companies to invest in automation.

greenmountin 1 hour ago 0 replies      
One approach to understanding low productivity that I find interesting is covered in Jason Furman's very readable "Is This Time Different? The Opportunities and Challenges of Artificial Intelligence" report [1] -- he's one of Obama's lead economists.

In short, productivity growth may be lagging not for lack of new technologies like AI; it's because people have to spend time retraining for jobs not obsoleted by those technologies. He briefly states his rejection of Gordon.

[1] https://www.whitehouse.gov/sites/default/files/page/files/20...

mcguire 59 minutes ago 0 replies      
There is something radically wrong with how they measure productivity. I don't know what it is, but the details matter.

I remember the first verse of this song, back in the '90s when economists were wondering if information technology was a productivity sink. All of this new fun stuff was everywhere, but productivity measurements were going down. At the same time, just-in-time inventory management was acknowledged as the greatest innovation since sliced bread---and you can't do that without information technology. And the changes since then have only made IT more central and that has reduced the friction to do essentially everything else.

darawk 1 hour ago 0 replies      
Am I missing something or did the 'debunking' of theory #1 not hold any water?

> First, they said, mismeasurement has always been an issue in the information-technology sector, and it was just as big an issue in the period from 1995 to 2005, when productivity was growing rapidly.

If your hypothesis is that non-technological productivity growth has reached its peak but technological productivity growth is accelerating, then this is exactly what you'd expect. And you'd expect the mis-measurement issue to take you much further from the mark now than it did then.

Saying that mismeasurement has always been an issue completely ignores the fact that how big of an issue it is relative to other factors may have changed.

> Second, the productivity slowdown hasn't been confined to sectors in which output is tough to measure: it has been broadly based

This is just stating the contradiction of the original hypothesis. If mis-measurement is an issue, how do you know that it's broadly based?

> And, finally, many of the benefits that we've reaped from things like smartphones and Google searches have been confined to non-market activities

While true, it's hard to imagine that things like access to Wikipedia, Stack Overflow, instant global messaging, etc. haven't significantly enhanced productivity for a variety of office workers as well. Not to mention automation technologies a la Amazon's Kiva robots and other factory automation tech.

tunesmith 2 hours ago 2 replies      
What sorts of constraints in our daily lives feel blindingly obvious, in terms of wishing for a solution? I mean beyond true life-transition concerns like finding new housing or new jobs.

In the past, I'm sure a clothes-washing machine must have felt pretty obvious even if people weren't sure what it was. It's always been a pain to do laundry and there had to have been a wish for some sort of contraption to make it drastically easier, even before knowing what that contraption would be.

When I think of real constraints in daily life, I can only think of things like - the effort of transition from waking to working, commuting, perhaps the convenience of having nutritious food that improves your health, and... what else?

What sorts of things do you think of when you say, "I wish I could just snap my fingers and..."

imglorp 2 hours ago 1 reply      
I normally take a dim view of economics and this article is worse than usual. My complaint is the massless, frictionless world of economics does not take real world forces into account.

> technological advancement just ain't what it used to be.

We're about to see millions of human roles vaporize when automated driving hits home. This isn't like the dishwasher or desktop calculator. We've already seen self checkout stations. How long until whole stores are automated? Eg: http://newatlas.com/go/5028

> Many productivity-enhancing new technologies are capital goods -- think back to the moving truck.

Bullshit as well. Hardware is rapidly approaching free as services dominate. Plus most of our readers shift capex to opex by cloud hosting, for example. Furthermore, offshoring and free trade means you won't invest in equipment because you're not even making the widgets any more, in house.

> Look at what an ideal kitchen looked like in 1955

Mostly irrelevant when "instant gratification isn't fast enough". Our kids can't even be bothered to microwave something for 30 seconds, so prepackaged convenience dominates households.

Finally, what does corporate malfeasance (eg VW emissions, banking crisis, LIBOR fraud etc) do to productivity?

I think this is a more honest assessment: http://www.businessinsider.com/alan-greenspans-mistake-has-l...

jackcosgrove 1 hour ago 0 replies      
I'll modify Gordon's hypothesis. New technology is no less productive than it used to be. The difference is that new technology requires fewer humans in the loop. That's why productivity is rising but wages are not.
guelo 2 hours ago 0 replies      
Yes, things like Google Search must be increasing productivity. But at the same time the amount of time dedicated to infotech entertainment addiction is going through the roof. I wonder how much productivity the hours of Facebooking at work are costing us.
dredmorbius 45 minutes ago 0 replies      
This is yet another review of Robert Gordon's The Rise and Fall of American Growth. Incidentally, William Nordhaus also just published one.[1]

Of the good: Gordon's book is a tour-de-force of (nearly) the past 150 years of economic progress in the US, and documents impacts on everyday life meticulously and engagingly. I and numerous reviewers, several of whom I strongly suspected would disagree, find the case he makes for a hump-shaped growth curve -- that is, an increasing acceleration from 1870 to 1950, followed by a slowing acceleration, though continued growth, from 1950 to the present -- compelling.

Gordon also makes a good case for several of the factors contributing to the exceptionally vibrant growth from 1920 - 1950, including especially the stimulus of World War II and the post-war recovery. But also the very fundamental nature of many of the innovations brought online at the time.

His treatments especially of transportation, communications, healthcare, and household life are spectacular. I recommend them highly.

There are some missing elements though.

Gordon never uses the phrase "Maslow's Hierarchy" -- the pyramid of needs that humans must have met, starting with food, clothing, and housing, and extending through safety and security, social engagement, and self-fulfillment. He does address many of these individually, and notes that early innovations tended strongly to address the base elements, and of the importance of security and predictability (a tremendously under-acknowledged failing of post-1970 economic experience) among individuals.

A possible defense of ongoing growth might be made by seeing the present period as one of a consolidation and development of technologies -- say, very large datacenters and broadband backplanes, as well as increasingly capable mobile devices and programming tools for developing them -- which may see some future breakout. Gordon seems in this, and several other areas, particularly incurious.

Gordon's life work is largely focused on GDP measurement, and he leans very heavily on this. His argument is that GDP underaccounts for true improvements in lifestyle, an argument with some merits, though others argue that it overaccounts by failing to take into consideration diseconomies. It's curious that Gordon doesn't explore alternative quality-of-life measures suggested by many contemporary critics of economic orthodoxy.

More fundamentally, in not addressing them, Gordon raises some very profound questions over the fundamental basis of economics. What is wealth? What is value? How is value associated with cost and price? How should costs of renewable vs. nonrenewable resources be considered? Is energy a fundamentally different economic input? What is the relationship of energy to economic growth? What is technology? What is the mechanism, or mechanisms, by which technology does, and doesn't, promote increased productivity? What are the market-mediated impacts of improved productivity, particularly as expressed through the Jevons paradox and Giffen and Veblen goods? How does one measure quality? How does one measure the total capacity or capability of an economy?

These aren't easy questions. They are, I'm finding as I research economic theory and its history, less ridiculous, and rather more explored, than I'd have thought. There are some exceptionally notable departures and paths taken in economic theory over time, often poorly addressed in the current curriculum.

As with some of the infrastructure issues above, Gordon's marked disinclination to pick up stones, particularly ones on which sacred cows seem perched, strikes me as a singular weakness of his book.

There are other authors, mostly neglected, who've explored this space. Eric D. Beinhocker's 2006 book The Origin of Wealth, Nicholas Georgescu-Roegen's Entropy and the Economic Process, W. Brian Arthur's work on the economy as an evolving complex system, and others. I see the questions and explorations as deeply related.



1. http://www.nybooks.com/articles/2016/08/18/why-economic-grow... Nordhaus is cited in Gordon for his work on economics of both computers and lighting, and addresses advances of lighting (and underaccounting of these in GDP) in his review. It's an interesting exploration of limitations of economic measurement.

Study found bronze medal-winners tended to be happier than silver medalists npr.org
41 points by AWildDHHAppears  3 hours ago   16 comments top 7
spir 2 minutes ago 0 replies      
At Amazon years ago, a director once told me "SDE3s are very happy, SDE1s are just happy to be here, and every SDE2 thinks he should be an SDE3."
keithnz 3 hours ago 2 replies      
1 just missed winning, 1 just missed getting nothing.
alex- 18 minutes ago 0 replies      
I am reminded of a quote "Happiness is living without expectations"

The favorites in a competition will have much higher expectations than the rest.

AndrewKemendo 2 hours ago 2 replies      
Seinfeld encapsulates this phenomenon perfectly:


taneq 1 hour ago 0 replies      
Second is the first loser. Third is the best of the rest.
kevindeasis 3 hours ago 1 reply      
Here's a photo I saw earlier, kinda reinforcing the idea.


KennyCason 1 hour ago 0 replies      
Not going to lie, that silver in JiuJitsu has lingered in my brain for a couple years now :)
Show HN: Give 7B people an instant physical address github.com
333 points by roberdam  13 hours ago   174 comments top 41
mabbo 10 hours ago 3 replies      
Maybe I'm missing something but why does "Pearl" mean 8480 and "Magical" -6129? If that's a hash, I really dislike this, because A: humans can't do hashing in their heads, and B: get one letter wrong and suddenly you don't have the right address. This has also been my big objection to what3words.

Addresses need to be resilient. If I make a tiny mistake in my address, there should be enough redundant data to help fix the problem. Postal Codes are great for this, because if you get your code wrong, well, it'll probably be one nearby and the delivery service can figure it out. But with hashes? One letter off and you're doomed.
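mabbo's resilience worry can be made concrete with a toy sketch. The wordlist and offsets below are invented for illustration (they are not Xaddress's actual tables); the point is that a near-miss word decodes to a valid but distant location, with nothing flagging the slip:

```python
# Toy word-coded address: each word stands for a signed offset in units
# of 1/10000 degree (an invented mapping, not Xaddress's real tables).
WORDS = {"PEARL": 8480, "PEARLY": -3120, "MAGICAL": -6129, "MAGNETIC": 977}

def decode(adj, noun, base_lat=40.0, base_lon=-74.0):
    # Apply each word's offset to a base point to get the final lat/lon.
    return (base_lat + WORDS[adj] / 10000.0,
            base_lon + WORDS[noun] / 10000.0)

right = decode("PEARL", "MAGICAL")
wrong = decode("PEARLY", "MAGICAL")   # one-letter slip, still a valid word
lat_error = abs(right[0] - wrong[0])  # 1.16 degrees of latitude: ~129 km
```

A postal code that is one digit off usually lands nearby, so a human can often repair it; here the decoded point silently jumps over a hundred kilometres.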

sly010 10 hours ago 3 replies      
As a sidenote, Australia moves about 7cm north annually [1], so any encoding of GPS location is not flexible enough in the long term.

Also only encoding lat/long makes it so you would not be able to address a single floor in a large building.

[1] http://www.bbc.com/news/technology-36912700
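To put the drift figure in perspective, a rough calculation (the ~3 m cell size is an assumption about the precision such schemes target):

```python
# Back-of-envelope: how long until continental drift moves a fixed
# lat/lon-derived address off the spot it originally named?
drift_m_per_year = 0.07   # ~7 cm/year for Australia
cell_size_m = 3.0         # assumed precision of the address cell
years_until_off_by_one_cell = cell_size_m / drift_m_per_year
print(round(years_until_off_by_one_cell, 1))  # ~42.9 years
```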

jakobegger 10 hours ago 4 replies      
What makes this better than actual coordinates?

I can easily find numeric coordinates on a map, without any special tables or computations.

I can immediately see if two numeric coordinates are close just by looking at the numbers.

Very many people understand how to use standard numeric coordinates.

et-al 6 hours ago 1 reply      
Wow, this is a much friendlier alternative to what3words / what3fucks!

Few things that I felt could be potential issues:

1) GPS coordinates are great for approximations for areas without clear markers, but in built up urban areas, they don't work so well. As a stranger to a new city, I can fumble around and ask someone, "Excuse me, can you help me find 222 Hyde Street?" And they'll hopefully point me in the right direction, and I'll wander over until I hit Hyde Street and try heading north or south depending on the numbers.

Secondly, if we hash the geocode, then we lose that sense of relativity. I no longer know if "Pearl" is east or west of "Tiger" and don't know which direction to move in.

In addition, there's still the final mile (or rather 3 meter) issue. When I'm at a particular lat/long, am I at the right spot? Or is my GPS off, or did the person put the pin just slightly off? Using traditional addressing, when I have a house number, I know it's the right house because it says so on the door.

2) The icons used for error-checking may not be universally known. For example, 83.png is a hat, but for someone unfamiliar with this, it's.. a horn? a bell? I know this was an MVP to show (and it's great!), but it is something to keep in mind.

Ultimately, I think this has potential for the delivery industry (before drivers get replaced by drones/driverless cars, of course). It's already more intuitive than the chiban-style addressing system in Japan [0]. However, similar to that system, we're still reliant on a book/web site/app to decode the address. And so here, we're assuming people will readily have access to a smartphone with a reliable GPS.

[0] https://en.wikipedia.org/wiki/Japanese_addressing_system

diskooo 9 hours ago 1 reply      
Isn't this pretty much the same concept as http://what3words.com/, or am I mistaken?
jpalomaki 8 hours ago 1 reply      
I see the point especially with foreign addresses. It can be a bit of a challenge to communicate addresses like "Nándorfejérvári út 4" or "Kısıklı Büyük Çamlıca Cd. No:9" over the phone. Coordinates are of course an alternative, but for most of us, a few words and a number are easier to keep in mind for a few minutes.

The given notation is hard to decode inside the head, but if you can memorize it for 30 seconds then you can fire up an app to do the decoding.

Pictogram is a clever idea for the checksum. Probably something that sticks in your head. Reminds me of the Lotus Notes login dialog [1], [2]

[1] https://blog.codinghorror.com/the-dramatic-password-reveal/
[2] http://www.tenable.com/pvs-plugins/1305

SamBam 8 hours ago 1 reply      
I don't see any versioning on this, and that seems like it is something that needs to be implemented ASAP if you actually want any of these addresses to start to be used in the wild.

If you make any changes to your word lists (in response to, say, all the comments here that say you should be omitting rude words), your old addresses won't be valid.

If someone gives me the address "Afraid Anus" (to pick an example someone found below), but I'm using a new version of your word list that omits "Anus," how am I supposed to know which version of the word lists I should be looking at?

It's all well and good to say that it will be forever backwards-compatible, but what if people find real flaws with the system?

(Another potential flaw: are homophones excluded? What if I'm giving the address over the phone?)

niftich 7 hours ago 1 reply      
At first I was skeptical of 'yet another geocode', but the addition of the avatar alongside it (for error-checking and instant-recognizability purposes) makes this scheme intriguing, and in my opinion, far better than any other geocoding.

It's essentially a combination of an Identicon [1] (of sorts, but with actual clip-art) and hash wordlist, but transmitted together as a unit. This is a very good idea and exploits a lot better how humans actually memorize and recognize things.

It needs a better wordlist though. Better is subjective, but the current list is very anglocentric, not very distinctive, and laced with profanity, making it awkward for interactions between strangers.

[1] https://en.wikipedia.org/wiki/Identicon
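The identicon-plus-wordlist observation suggests a simple sketch of how such a checksum icon could be derived. The icon count and the use of SHA-256 here are assumptions for illustration, not the project's actual scheme:

```python
import hashlib

# Derive a stable icon index from the full textual address, so that a
# mistyped address will almost always display a different picture, which
# the recipient notices at a glance.
NUM_ICONS = 100  # assumed size of the clip-art set

def icon_index(address: str) -> int:
    # Normalize spacing/case so cosmetic differences don't change the icon.
    digest = hashlib.sha256(address.strip().upper().encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_ICONS

# Same address always yields the same icon.
a = icon_index("8480 PEARL MAGICAL")
b = icon_index("  8480 pearl magical ")
assert a == b and 0 <= a < NUM_ICONS
```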

goodmachine 10 hours ago 0 replies      
AFRAID SEA here. To avoid confusion, might be best to exclude words that are common placenames from the word list (street, road, lake) as well as rude words.

Great to see an open alternative to http://what3words.com (and I also love the icon as visual hash.)

aw3c2 11 hours ago 1 reply      
You missed http://what2numbers.org/ and http://www.what3fucks.com/ in the comparison.
jxf 9 hours ago 1 reply      
I think this is a neat way of encoding things. But this also has a lot of shortcomings that existing mailing addresses don't have; it's really only useful for identifying a physical spot, which is what a lat/long and geocodings of mailing addresses will do anyway.


* What if you live at an apartment?

* What if your address doesn't correspond to a fixed physical location, like an APO?

* Aren't the city and country redundant if you have a physical lat/long location?

Still, we need to look at how we send information to our fellow humans. Addresses are important and I'm glad folks like the OP are taking a closer look.

dalbin 11 hours ago 1 reply      
Love the concept, but the P2 code depends on the country, and I think it should not rely on geopolitical divisions. For example, in France, regions have changed recently (changes in name and mergers).
ComodoHacker 11 hours ago 2 replies      
Yet another location encoding. Why not MapCode[1]?

>Designed to be used in low tech environments

What are the supposed uses of encoded latitude and longitude in "low tech environments", other than simple transfer?

1. https://en.wikipedia.org/wiki/MapCode

acangiano 10 hours ago 2 replies      
Great idea / initiative but I must ask, are you new to Ruby? I've rarely come across code that was so not idiomatic.
codev 11 hours ago 10 replies      
My current location is 5308 AFRAID ANUS. Might be best to exclude ANUS from the word list.
pingec 10 hours ago 1 reply      
This is genius! Seems like HN killed it though. The demo is not working anymore, probably rate limits have been hit.

Edit: seems to work on the default location but not when I move the maps to Slovenia.

wscott 12 hours ago 1 reply      
It is nicely done and unlike what3words.com (which is very similar) this version isn't commercial.

The icon acting as a visual hash is a nice touch.

datenwolf 6 hours ago 0 replies      
IMHO the Maidenhead Locator System [1] solves this problem very nicely. Locations are concise and can be specified with arbitrary precision. And the encoding scheme makes vocal transmission quite robust.

1: https://en.wikipedia.org/wiki/Maidenhead_Locator_System
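For reference, the common 6-character Maidenhead encoding (field, square, subsquare) is short enough to sketch in full:

```python
# Minimal Maidenhead locator encoder: two letters for the 20x10 degree
# field, two digits for the 2x1 degree square, two letters for the
# 5'x2.5' subsquare.
def maidenhead(lat, lon):
    lon += 180.0
    lat += 90.0
    loc = chr(ord('A') + int(lon // 20)) + chr(ord('A') + int(lat // 10))
    lon, lat = lon % 20, lat % 10
    loc += str(int(lon // 2)) + str(int(lat // 1))
    lon, lat = lon % 2, lat % 1
    loc += chr(ord('a') + int(lon * 12)) + chr(ord('a') + int(lat * 24))
    return loc

print(maidenhead(48.14667, 11.60833))  # Munich: JN58td
```

Note that nearby points share a prefix (all of central London is IO91), which is exactly the locality property the hashed-word schemes above give up.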

sly010 10 hours ago 2 replies      
This is probably a first world problem, but I wouldn't mind something similar for the western world. Some sort of extended zip code that I can just enter on any device in a second, and it represents my entire fully and correctly defined location, including door number, floor, gps coordinates, crossing streets, and whether it's the left or right side of a one-way street. Just a ~12 digit number for every single door (and maybe window, for drone delivery) in the world. It would probably have a positive impact on the economy.
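A quick capacity check on the ~12-digit idea (the figures below are order-of-magnitude assumptions, not real statistics):

```python
# Would 12 digits be enough for every door (and window) on Earth?
codes = 10 ** 12                 # distinct 12-digit numbers
people = 7_000_000_000
doors_estimate = people * 5      # generous guess: several doors/windows each
spare_per_door = codes // doors_estimate
print(spare_per_door)            # ~28 codes to spare per door
```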
yladiz 10 hours ago 0 replies      
I think this is a really nice idea, and it seems somewhat more memorizable than a what3words address because it incorporates numbers. Really cool work! The only two complaints I have are 1) as others in this submission have said, the list should really be curated, or at least filtered to remove "bad words", and 2) ideally this algorithm could be tweaked to not require a city to go along with it, so that a place that hasn't been able to designate cities can use it more easily.
king_magic 11 hours ago 1 reply      
This is really clever. It would be incredible if this took off and became a standard way to represent addresses.
joekinley 12 hours ago 2 replies      
The site doesn't work for me as the Google Map API seems to be broken, Authorization removed or something
swehner 6 hours ago 0 replies      
From the list of words at https://github.com/roberdam/Xaddress/blob/master/en/adj_en.c..., I take it that some locations will be labelled DESPERATE, UNSPEAKABLE, POSTDOCTORAL, OBSTETRICAL, GENEALOGICAL, ASSASSINATING, CORRUPTED, PROGESTERONE, etc., also RUSSIAN, UKRAINIAN, DUTCH, etc. Where did that list come from?
trekking101 10 hours ago 2 replies      
But why? Forget about the tech. What does this accomplish? What3words is nonsensical for the same reasons- Amazon Prime is not coming to Mongolia anytime soon and it has a lot more to do with little disposable income/economics than logistics.
austinlyons 4 hours ago 0 replies      
Address in South Austin, TX: "Grounded Bastard 1780"


cgsmith 11 hours ago 1 reply      
First, you have to teach long division and long multiplication to your 7B people. Like the concept and idea and truly hope for adoption.

Is it possible to reduce it to addition and subtraction? I feel that multiplication and division are a high barrier for most.

Ocha 11 hours ago 2 replies      
uobytx 3 hours ago 0 replies      
I poked around and found: 1861 TIME WHORING. Nifty concept, could use some work.
hasenj 7 hours ago 1 reply      
Resembles a normal address? In the Anglosphere, maybe.

Have you seen how addresses look like in other countries?

For example, in some countries, the address is broken down into several components:

City, District, Block, Street, Building

gjem97 12 hours ago 1 reply      
Love this. I couldn't see any deeper explanation of the encode/decode algorithm. Is that written up somewhere?
sharemywin 11 hours ago 1 reply      
I got a oops something went wrong while clicking "try it now". using chrome on a desktop.
jspekken 12 hours ago 1 reply      
Is there any reason the code is written half in Spanish, half in English? As I'm from the Netherlands my understanding of the English language is great, but because this is in Spanish I have to use Google Translate to read the code.
losteverything 10 hours ago 0 replies      
Isn't this a solution waiting for a (sponsored) problem?

Zone Improvement Plan solved a problem and where would shipping be without it?

JoshGlazebrook 9 hours ago 0 replies      
I was so confident you were using particles.js, but turns out you coded that yourself!
jbb555 9 hours ago 2 replies      
Very nice. But nobody is going to use it as it's solving a non problem. Nobody has any issue giving people an address.
saalweachter 4 hours ago 1 reply      
My biggest problem with this sort of scheme (and the similar ones other people have mentioned in this thread) is that it doesn't solve any of the difficult problems an addressing system needs to solve.

At best, it's just a "maybe easier to remember than GPS coordinates" system -- but remembering GPS coordinates with this level of precision is not really an insurmountable problem. I know it's hard to remember, but up until a decade or so ago, people used to remember dozens of phone numbers, which requires a similar level of memorization.

It doesn't make addresses any easier to find than GPS coordinates -- you pretty much need a GPS. It doesn't necessarily guarantee unique addresses -- you get a different address if you pick different points on the same building or property.

It doesn't encode any routing information. So you're a delivery company asked to deliver a package to 123 Wascally Wabbit. You can convert that to a GPS coordinate easily enough, sure. But then what? Which delivery truck do you put the package on out of which delivery center? You can easily compute which delivery center is physically closest to an address, but that doesn't tell you if there is a natural feature like a river or mountain between the two points. On the other hand, USPS addresses encode the routing information -- the third line is the name of the post office that the delivery vehicle for that address departs from.

It doesn't encode any navigation information. How do you get to the address once you're in the general area? Which road gets you there? If you're a helicopter or a drone or a crow, you can just fly straight to the GPS coordinates, sure. But if you're walking or driving, as most people will be, you've just re-invented the sport of orienteering. This is the problem 911 emergency addresses were intended to solve -- you have a street name and a number for every address, so if you need to find an address in a hurry, you just navigate to the street (which is an easy amount of local knowledge to learn), and then you have a linear ordering of addresses so that you know when you've passed the address you're looking for.

It doesn't encode any service area type information. You want to order a pizza -- does the parlor deliver to 123 Wascally Wabbit? You don't know, and neither do they. You call 911 for a fire truck or ambulance -- are you served by the Newtown fire department, or the Newburgh fire department? Again, your emergency services need to plot your address on a map and then figure out whose region you're in. They can't just say, "Oh, you're in Plymouth, a suburb of Newburgh," and route your call appropriately.

(Essentially) saying "just use (masked) GPS coordinates" may technically assign addresses to all of those billions of places which aren't on named roads in recognized municipalities, but it doesn't solve it usefully.

csomar 11 hours ago 0 replies      

Not bad.

parennoob 9 hours ago 0 replies      
I have seen a lot of these co-ordinate based alternative address mechanisms (in fact the author gives a list -- https://github.com/roberdam/Xaddress#compare-encodings). But in my opinion, one thing they all ignore or get wrong is establishing some kind of linking mechanism between real world and co-ordinate based addresses. This is a hard problem, but one without which it is not going to be able to gain significant adoption.

How do we solve it? I am not sure. Maybe somehow augmenting regular national postcodes with coordinates would work better than these (kinda like the 4-digit augmentation to US ZIP codes), because those are things you use on a daily or at least monthly basis, and remember. Something like 98002-MAGICAL PEARL would be more easily remembered, and not get rid of existing information.

jbb555 12 hours ago 2 replies      
The page seems to forget to say what the point of this is.
bumbledraven 6 hours ago 0 replies      
Google's Open Location Code is better. https://github.com/google/open-location-code
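The core of Open Location Code is likewise a short base-20 refinement loop. This sketch omits the clipping, padding, and code-shortening logic that the real library handles:

```python
# Each pair of digits (latitude digit, then longitude digit) refines the
# cell by a factor of 20; a '+' separates the 8-digit area code from the
# local refinement digits.
OLC_ALPHABET = "23456789CFGHJMPQRVWX"

def encode_plus_code(lat, lon, pairs=5):
    lat += 90.0   # shift into non-negative ranges
    lon += 180.0
    res = 20.0    # degrees covered by the first digit pair
    code = ""
    for _ in range(pairs):
        code += OLC_ALPHABET[int(lat // res)] + OLC_ALPHABET[int(lon // res)]
        lat %= res
        lon %= res
        res /= 20.0
        if len(code) == 8:
            code += "+"
    return code

print(encode_plus_code(47.365590, 8.524997))  # 8FVC9G8F+6X (Zurich area)
```

Like Maidenhead, this keeps locality: truncating a code yields the containing cell, so nearby places share prefixes.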
losteverything 10 hours ago 1 reply      
Remember, addresses are very personal. They are reflective; they are metallic and have a magnetic quality; they are outwardly expressive; they have a perceived value; they are understandable; they are historic (having a history)
The OpenSSD Project (2011) openssd-project.org
77 points by ashitlerferad  7 hours ago   10 comments top 6
pkaye 3 hours ago 1 reply      
I have been involved with SSD development for many years. The thing is flash memory is no longer a near ideal memory device like RAM. You start to see a lot of the physics behind how the transistors behave and interact. The companies spend lots of time and money characterizing them and developing algorithms that are trade secrets. You may figure it out yourself but by that time that part is already EOL. And if you don't do it right your SSD will be unreliable.
daenney 5 hours ago 0 replies      
This has been around for a couple of years and the wiki is kind of dead, as does most of the project, judging by the activity around it. This probably warrants a (2011) update to the title, as it seems that's when this project was truly active.
mdip 5 hours ago 1 reply      
Is it possible to buy a reference board like the one described in http://www.openssd-project.org/wiki/Jasmine_OpenSSD_Platform
rdslw 6 hours ago 0 replies      
Am I the only one who read it as the OpenBSD project and wondered, eee, where is the (1996) for the first homepage dug up from the archives?
nickpsecurity 6 hours ago 0 replies      
This could be a nice tool for prototyping disk cryptosystems or accelerators for databases.
ProtonMail now the maintainer of OpenPGPjs email encryption library protonmail.com
68 points by edvbld  3 hours ago   10 comments top 2
bigiain 1 hour ago 1 reply      
So my first reaction there was "there goes another non-five eyes (or nine or fourteen eyes) hosted mail service who've just painted a (or another) great big target on themselves to attract even more NSA scrutiny".

(Second reaction was "Crypto in the browser in Javascript _again?_ Didn't we already point out this is 'doing it wrong'?")

mkohlmyr 1 hour ago 0 replies      
Good stuff, paying customer and planning to stay one :) I do wonder though if you are thinking about providing PM as a browser extension based app as well? Would curb some of the issues people always bring up re crypto in browser.
Citizen scientist hunts for kissing bugs statnews.com
44 points by aabaker99  6 hours ago   14 comments top 6
mhurron 4 hours ago 5 replies      

They're also called assassin bugs and are pretty common. If you treat around your house for bugs like ticks, fleas, ants and termites you're probably using an insecticide that will also control these things.

Yes I know ticks aren't bugs, but none of what I listed are true bugs.

And now my skin is itchy.

ktRolster 4 hours ago 0 replies      
"Behind all of that knowledge is a basic principle," he said. "The closer you look, the weirder it gets."

It's a fractal

pvaldes 3 hours ago 1 reply      
Curious guy. The kind of people able to make things regrow after their footsteps. They are scarce and often elusive.
whyenot 3 hours ago 0 replies      
Just to be clear, there is a test and a treatment for Chagas disease (although not necessarily a cure).
rince 3 hours ago 0 replies      
You know, a clear picture of a kissing bug would greatly enhance this article.
qntty 5 hours ago 0 replies      
this is terrifying
Type Punning Functions in C evanmiller.org
123 points by heliostatic  9 hours ago   64 comments top 8
kbenson 9 hours ago 3 replies      
Sir, while the gentlemen and ladies of this fine organization are infinitely pleased with your research into the black magic of the nether regions and the treatise you have presented from aforesaid research, we would ask you please be even more strident in your warnings about practicing this subject. It seems a number of neophytes have disappeared under odd circumstances in the weeks since you presented your work, and the interruption to the experiments they were assisting with has become quite burdensome. Additionally, we expect training and recruitment costs to be quite a bit higher than normal next year, and as you well know our coffers are not unlimited.
quotemstr 6 hours ago 1 reply      
Here's an example I'm rather proud of. It involves setting getpid(2) as a signal handler(2) via JNA.
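For readers who want to try it, here is the same stunt in plain C rather than JNA -- a sketch that relies on undefined behavior (the handler type doesn't match getpid's), and that happens to work on mainstream x86-64/AArch64 Linux only because getpid ignores the signal-number argument handed to it:

```c
#include <signal.h>
#include <unistd.h>

/* Register getpid(2) as a signal handler. The required handler type is
   void (*)(int); getpid is pid_t (*)(void). The cast compiles, and on
   common ABIs the stray int argument lands in a register getpid never
   reads -- but the C standard calls this undefined behavior. */
int survive_getpid_handler(void)
{
    signal(SIGUSR1, (void (*)(int))getpid);
    raise(SIGUSR1);        /* "handled" by getpid, return value discarded */
    return (int)getpid();  /* still alive if we got here */
}
```

Calling survive_getpid_handler() should return the process id instead of the process dying of SIGUSR1 -- the point being that the handler "worked" only by ABI accident, which is the article's theme.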


jheriko 1 hour ago 1 reply      
it's a shame this only glosses over the subject, and starts with a highly misleading example and then continues with plenty more misleading information.

unix/windows are not interchangeable with compilers targeting those platforms. the article doesn't make the distinction or highlight why it matters - the OS has nothing to do with calling conventions in this context, they are just very conveniently the same in this case.

this example doesn't work with vs2010 out of the box because it targets win32 by default... for example. it's the compilers, and only the compilers, which are deciding this. the standard calling convention for the platform is incidental, and it being identical to the default for the compiler is a wise design choice for compilers targeting x64, rather than a necessity - something which is much clearer in the old world of windows and x86 (32-bit), where cdecl was the default out of the box, and stdcall was what windows api calls expected.

pklausler 7 hours ago 7 replies      
If your C compiler doesn't complain bitterly to you about an incompatible pointer type on that assignment, get a better C compiler. If your C compiler does complain bitterly about it, heed those warnings or expect no sympathy.

If you write this in production code, and don't get sent home that day in tears, your company's code review process has failed.
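pklausler's point cuts both ways: the cast that silences the warning is only safe if you cast back before calling. Converting a function pointer to another function-pointer type is defined; calling through the wrong type is not (C11 6.3.2.3p8). A small sketch:

```c
/* Converting a function pointer between function-pointer types is
   legal; only a call through the original, correct type is defined. */
static int answer(void) { return 42; }

int roundtrip_call(void)
{
    void (*erased)(int) = (void (*)(int))answer;   /* conversion: OK   */
    /* calling erased(0) here would be undefined behavior              */
    int (*restored)(void) = (int (*)(void))erased; /* convert back: OK */
    return restored();                             /* defined call     */
}
```

This is why the compiler only warns instead of erroring: the conversion itself has legitimate uses (callback registries, plugin tables); it's the mismatched call that's the landmine.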

david-given 3 hours ago 1 reply      
Incidentally, the codebase I'm looking at right now contains gems like this:

 /*VARARGS2*/
 int margin_printf (fp, a, b, c, d, e, f, g)
     FILE *fp;
     char *a;
     long b, c, d, e, f, g;
 {
     ind_printf (0, fp, a, b, c, d, e, f, g);
 } /* margin_printf */
Called (from a different source file) like this:

 margin_printf (outfile, length ? "/* %s */\n" : "\n", storage);
Okay, so that's K&R C, and it's not actually compiled any more (because I've been slowly taking this stuff out and replacing it with things that actually work), but still --- the horror, the horror...

pavlov 6 hours ago 2 replies      
... we saw how register allocation and calling conventions -- supposedly the exclusive concern of assembly-spinning compiler writers -- occasionally pop their heads up in C ...

I don't know about "occasionally". At least on Windows, calling conventions used to be a constant headache. The C language default was cdecl, but the Win32 API default (most of the time) was stdcall. Then there was a variant (fastcall?) that tried to make use of registers.

Maybe it's all fixed now... But this being Win32, I rather expect they've managed to accumulate a few more "interesting" gotchas and edge cases as the platform expanded with 64-bit, WinRT, Universal Windows Platform and whatever.

tathougies 8 hours ago 5 replies      
This is undefined behavior, so your claim that this is 'in C' is blatantly false. In general, you cannot assume the underlying architecture will pass its arguments in any particular location.
bitwize 7 hours ago 0 replies      
I freaking hate you, OP. I really freaking hate you right now. This is an abomination in the sight of God.
Data Analysis and Visualization Using R (2014) varianceexplained.org
44 points by michaelsbradley  5 hours ago   2 comments top
minimaxir 4 hours ago 1 reply      
These tutorials are from 2014. While they provide a good overview of R syntax, a lot has been added to the R-verse such as dplyr, which the author primarily used for his Trump Tweets blog post yesterday.

If you are interested in learning R, you may want to read the R for Data Science book (http://r4ds.had.co.nz/) by dplyr (and ggplot2) author Hadley Wickham.

Relatedly, I have my own (slightly more complicated) notebooks using R/dplyr/ggplot2, open-sourced on GitHub, if you want further examples of real-world analysis with publicly-available data along the lines of the Trump Tweet analysis:

Processing Stack Overflow Developer data: https://github.com/minimaxir/stack-overflow-survey/blob/mast...

Identifying related Reddit Subreddits: https://github.com/minimaxir/subreddit-related/blob/master/f...

Determining correlation between genders of lead actors of movies on box office revenue: https://github.com/minimaxir/movie-gender/blob/master/movie_...

The Million-Key Question: Investigating the Origins of RSA Public Keys usenix.org
78 points by dc352  9 hours ago   24 comments top 4
misterrobot 1 hour ago 1 reply      
"Although RSA factorization is considered to be an NP- hard problem if keys that fulfil the above conditions are used..."

Isn't this incorrect? The implication would be that quantum computers could solve NP-Complete problems in polynomial time.

dc352 7 hours ago 1 reply      
lisper 7 hours ago 2 replies      
This is just one of many reasons one should switch from RSA to ECC.
PhantomGremlin 2 hours ago 0 replies      
Oh, to be a fly on the wall at Fort Meade.

Given the thousands of employees the NSA has working on all aspects of cryptography, there must be countless examples of this type of investigation. It's integral to traffic analysis and to fingerprinting of cryptosystems.

At least I hope that the NSA does lots of stuff like this. Because if they don't, what does that leave them doing? If the NSA is simply evil and/or incompetent, that's not enough ROI for the US taxpayers.

Unfortunately, NSA work probably remains highly classified for so long that an ex employee would never be able to write about it in technical detail. But I could be wrong? Are there any Inside Baseball books out there revealing the inner workings of NSA spooks?

Image Completion with Deep Learning in TensorFlow bamos.github.io
189 points by semanser  13 hours ago   10 comments top 4
canada_dry 4 hours ago 2 replies      
On a similar vein...


Uses TF deep learning to classify an image.

sforzando 6 hours ago 0 replies      
I was impressed with the Labeled Faces in the Wild (LFW) facial auto-completion results, especially since the system was not trained on LFW at all! The results seemed almost too good to be true. Perhaps this is a testament that there isn't that much diversity in human faces?

Very well written overall, and I appreciated the author's thoughts on TensorFlow+torch at the end of the article.

Adversarial training is a fascinating idea, and I love the sound of it. I'd like to start applying that concept in the future.

michael_h 11 hours ago 2 replies      
Oh man, those facial images are deep in the uncanny valley. Maybe on the rising side of the slope now, but still way down there.
deegles 2 hours ago 0 replies      
Could this be used to generate unique images from a training set?
The Miracle of the Modern Banana nationalgeographic.com
21 points by pmcpinto  3 hours ago   1 comment top
tehwebguy 49 minutes ago 0 replies      
> When people talk about fruit at cocktail parties, my only quibble is something semantic: how people use the word "the" -- as in, when the strawberry arrived in North America, or how the avocado is paralyzing Central American farmers.

Dude goes to some pretty stuffy parties.

For real though this is interesting. Seems logical that as a food we grow consolidates it becomes easier for one biological agent to wipe it out.

The bandwidth bottleneck that is throttling the Internet nature.com
46 points by okket  6 hours ago   28 comments top 6
excalibur 4 hours ago 3 replies      
This seems like as good a place as any to complain that auctioning off radio frequencies to the highest bidder is unlikely to result in optimal (or nearly optimal) utilization of scarce resources, and is therefore working against the public interest.
hyperion2010 1 hour ago 0 replies      
For a fun look at current fundamental limits on bandwidth I found this wikipedia page [0]. Really puts things in perspective.

0. https://en.wikipedia.org/wiki/List_of_device_bit_rates#Bandw...

Animats 1 hour ago 2 replies      
Most bandwidth usage is ads and video. And bloated apps and web pages. What else needs much bandwidth?

Even HDTV only needs about 20 Mbit/s, and can be compressed down to 8 Mbit/s or so without much visible loss.
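A back-of-envelope sketch of the comment's arithmetic in C (the rates are the comment's round numbers, not measurements, and the function name is mine):

```c
/* How many compressed video streams fit in a given link?
   Plain integer division; both rates in bits per second. */
long streams_per_link(long link_bps, long stream_bps)
{
    return link_bps / stream_bps;
}
```

At 8 Mbit/s per compressed HD stream, a 1 Gbit/s link carries 125 simultaneous streams -- which is why per-stream video rates alone don't account for the pressure described in the article without ad, app, and page bloat stacked on top.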

IAmGarrett 4 hours ago 1 reply      
Google, FB, and Microsoft own cables. So I can see that it's data-intensive companies. What does owning the cable do for them? They get to rent it out to other companies?
di 4 hours ago 2 replies      
> the Internet is still a global patchwork built on top of a century-old telephone system.

No, it's not.

joeblau 4 hours ago 0 replies      
First thing I did was open the page and try to find the word "Comcast".
Microsoft proves backdoor keys are a bad idea theregister.co.uk
229 points by ChuckMcM  9 hours ago   75 comments top 9
bradford 8 hours ago 15 replies      
(disclaimer, MS employee, non-security expert here).

I've read through the article, here, and in other places, and I'm seeing sentiment that this is a big fuck up on Microsoft's part. I might be completely misunderstanding, but I just don't see it.

In order to use the backdoor, you've got to flash firmware, so, you've got to have physical access to the device. If an attacker has physical access to your device, you're already screwed.

So, I don't doubt that the key exists (I had to use it myself when testing RT devices back in the win8 days), but what's the exploit here? Why is it, as the title suggests, a 'bad idea'? Isn't a secure boot policy that can be bypassed with physical access more secure than none?

contextfree 5 hours ago 0 replies      
I'm actually a bit confused about how this is a "golden key" problem (if I understand what that means).

As far as I can tell, the problem here is that there's a signed policy that was intended for newer versions of Windows, but is also interpreted by older versions of Windows as a valid policy with a different meaning. On Win10 1607 it means "under such-and-such conditions, merge these additional rules into the already applied policy" and on older Windows it just means "apply this policy".

But the only key here in both cases is Microsoft's regular signing key. Which I guess could be considered a kind of golden key/backdoor/whatever in itself - just as in the recent Apple vs. FBI standoff you could say the fact that Apple had the technical ability to sign and install a hacked OS was a backdoor to begin with - but that doesn't seem to be what people mean.

daenney 8 hours ago 3 replies      
I genuinely hope this will influence the whole government mandated back door debate for the better but I'm afraid that this will just be forgotten in a matter of minutes.

Like Gove said "we've had enough of experts", especially when their educated opinions don't suit us.

rocky1138 6 hours ago 1 reply      
> The Register understands that this debug-mode policy was accidentally shipped on retail devices, and discovered by curious minds including Slip and MY123.

> The policy was effectively inert and deactivated on these products but present nonetheless.

Whenever I read things like this, I always envision that it's not a cock-up at all, but instead a deliberate effort by righteous free software-minded people who happen to work at Microsoft and are dismayed by the things they're asked to do.

But that is probably because I wish it so.

ruste 5 hours ago 0 replies      
Is this code also used for the Xbox? It would be really cool if we could run linux/bsd easily on one of those.
tryp 8 hours ago 1 reply      
The researchers' writeup, in a very fun form, can be found at https://rol.im/securegoldenkeyboot/

With text as follows for those for whom the joviality of the original presentation is undesirable:

irc.rol.im #rtchurch :: https://rol.im/chat/rtchurch

Specific Secure Boot policies, when provisioned, allow for testsigning to be enabled, on any BCD object, including {bootmgr}. This also removes the NT loader options blacklist (AFAIK). (MS16-094 / CVE-2016-3287, and MS16-100 / CVE-2016-3320)

Found by my123 (@never_released) and slipstream (@TheWack0lian)
Writeup by slipstream (@TheWack0lian)

First up, "Secure Boot policies". What are they exactly?

As you know, secureboot is a part of the uefi firmware; when enabled, it only lets stuff run that's signed by a cert in db, and whose hash is not in dbx (revoked).

As you probably also know, there are devices where secure boot can NOT be disabled by the user (Windows RT, HoloLens, Windows Phone, maybe Surface Hub, and maybe some IoTCore devices if such things actually exist -- not talking about the boards themselves, which are not locked down at all by default, but end devices sold that may have secureboot locked on).

But in some cases, the "shape" of secure boot needs to change a bit. For example in development, engineering, refurbishment, running flightsigned stuff (as of win10) etc. How to do that, with devices where secure boot is locked on?

Enter the Secure Boot policy.

It's a file in a binary format that's embedded within an ASN.1 blob, that is signed. It's loaded by bootmgr REALLY early into the windows boot process. It must be signed by a certificate in db. It gets loaded from a UEFI variable in the secureboot namespace (therefore, it can only be touched by boot services). There's a couple .efis signed by MS that can provision such a policy, that is, set the UEFI variable with its contents being the policy.

What can policies do, you ask?

They have two different types of rules. BCD rules, which override settings in the on-disk BCD, and registry rules, which contain configuration for the policy itself, plus configuration for other parts of boot services, etc. For example, one registry element was introduced in Windows 10 version 1607 'Redstone' which disables certificate expiry checking inside mobilestartup's .ffu flashing (ie, the "lightning bolt" windows phone flasher); and another one enables mobilestartup's USB mass storage mode. Other interesting registry rules change the shape of Code Integrity, ie, for a certain type of binary, it changes the certificates considered valid for that specific binary.

(Alex Ionescu wrote a blog post that touches on Secure Boot policies. He teased a followup post that would be all about them, but that never came.)

But, they must be signed by a cert in db. That is to say, Microsoft.

Also, there is such a thing called DeviceID. It's the first 64 bits of a salted SHA-256 hash, of some UEFI PRNG output. It's used when applying policies on Windows Phone, and on Windows RT (mobilestartup sets it on Phone, and SecureBootDebug.efi when that's launched for the first time on RT). On Phone, the policy must be located in a specific place on EFIESP partition with the filename including the hex-form of the DeviceID. (With Redstone, this got changed to UnlockID, which is set by bootmgr, and is just the raw UEFI PRNG output.)

Basically, bootmgr checks the policy when it loads; if it includes a DeviceID which doesn't match the DeviceID of the device that bootmgr is running on, the policy will fail to load.

Any policy that allows for enabling testsigning (MS calls these Retail Device Unlock / RDU policies, and to install them is unlocking a device), is supposed to be locked to a DeviceID (UnlockID on Redstone and above). Indeed, I have several policies (signed by the Windows Phone production certificate) like this, where the only differences are the included DeviceID, and the signature.

If there is no valid policy installed, bootmgr falls back to using a default policy located in its resources. This policy is the one which blocks enabling testsigning, etc, using BCD rules.

Now, for Microsoft's screwups.

During the development of Windows 10 v1607 'Redstone', MS added a new type of secure boot policy. Namely, "supplemental" policies that are located in the EFIESP partition (rather than in a UEFI variable), and have their settings merged in, dependent on conditions (namely, that a certain "activation" policy is also in existence, and has been loaded in).

Redstone's bootmgr.efi loads "legacy" policies (namely, a policy from UEFI variables) first. At a certain time in redstone dev, it did not do any further checks beyond signature / deviceID checks. (This has now changed, but see how the change is stupid.) After loading the "legacy" policy, or a base policy from EFIESP partition, it then loads, checks and merges in the supplemental policies.

See the issue here? If not, let me spell it out to you plain and clear. The "supplemental" policy contains new elements, for the merging conditions. These conditions are (well, at one time) unchecked by bootmgr when loading a legacy policy. And bootmgr of win10 v1511 and earlier certainly doesn't know about them. To those bootmgrs, it has just loaded in a perfectly valid, signed policy.

The "supplemental" policy does NOT contain a DeviceID. And, because they were meant to be merged into a base policy, they don't contain any BCD rules either, which means that if they are loaded, you can enable testsigning. Not just for windows (to load unsigned driver, ie rootkit), but for the {bootmgr} element as well, which allows bootmgr to run what is effectively an unsigned .efi (ie bootkit)!!! (In practice, the .efi file must be signed, but it can be self-signed.) You can see how this is very bad!! A backdoor, which MS put in to secure boot because they decided to not let the user turn it off in certain devices, allows for secure boot to be disabled everywhere!

You can see the irony. Also the irony in that MS themselves provided us several nice "golden keys" (as the FBI would say ;) for us to use for that purpose :)

About the FBI: are you reading this? If you are, then this is a perfect real world example about why your idea of backdooring cryptosystems with a "secure golden key" is very bad! Smarter people than me have been telling this to you for so long, it seems you have your fingers in your ears. You seriously don't understand still? Microsoft implemented a "secure golden key" system. And the golden keys got released from MS own stupidity. Now, what happens if you tell everyone to make a "secure golden key" system? Hopefully you can add 2+2...

Anyway, enough about that little rant, wanted to add that to a writeup ever since this stuff was found ;)

Anyway, MS's first patch attempt. I say "attempt" because it surely doesn't do anything useful. It blacklists (in boot.stl) most (not all!) of the policies. Now, about boot.stl. It's a file that gets cloned to a UEFI variable only boot services can touch, and only when the boot.stl signing time is later than the time this UEFI variable was set. However, this is done AFTER a secure boot policy gets loaded. Redstone's bootmgr has extra code to use the boot.stl in the UEFI variable to check policy revocation, but the bootmgrs of TH2 and earlier do NOT have such code. So, an attacker can just replace a later bootmgr with an earlier one.

Another thing: I saw some additional code in the load-legacy-policy function in redstone 14381.rs1_release. Code that wasn't there in 14361. Code that specifically checked the policy being loaded for an element that meant this was a supplemental policy, and erroring out if so. So, if a system is running Windows 10 version 1607 or above, an attacker MUST replace bootmgr with an earlier one.

On August 9th, 2016, another patch came about; this one was given the designation MS16-100 and CVE-2016-3320. This one updates dbx. The advisory says it revokes bootmgrs. The dbx update seems to add these SHA256 hashes (unless I screwed up my parsing): <snip>

I checked the hash in the signature of several bootmgrs of several architectures against this list, and found no matches. So either this revokes many "obscure" bootmgrs and bootmgfws, or I'm checking the wrong hash.

Either way, it'd be impossible in practice for MS to revoke every bootmgr earlier than a certain point, as they'd break install media, recovery partitions, backups, etc.

- RoL

disclosure timeline:
~march-april 2016 - found initial policy, contacted MSRC
~april 2016 - MSRC reply: wontfix. started analysis and reversing, working on almost-silent (3 reboots needed) PoC for possible emfcamp demonstration
~june-july 2016 - MSRC reply again, finally realising: bug bounty awarded
july 2016 - initial fix - fix analysed, deemed inadequate. reversed later rs1 bootmgr, noticed additional inadequate mitigation
august 2016 - mini-talk about the issue at emfcamp, second fix, full writeup release

credits:
my123 (@never_released) -- found initial policy set, tested on surface rt
slipstream (@TheWack0lian) -- analysis of policies, reversing bootmgr/mobilestartup/etc, found even more policies, this writeup.

tiny-tro credits:
code and design: slipstream/RoL
awesome chiptune: bzl/cRO <3

45h34jh53k4j 6 hours ago 0 replies      
So from this I think you could say this is a universal microsoft secureboot implementation bypass. All you need is the signed policy file and an older, more obscure (signed) non-blacklisted bootmgr, and you can exploit secureboot to glory.

it's almost certainly going to be used for malware - a return of bootkits for invisibility/persistence?

microsoft will have to keep revoking older bootmgrs as they find them in jailbreak utils and bootkit malware. eventually they will run out, but for now, busted.

tempting to go buy some winRT devices for linux!

allendoerfer 5 hours ago 1 reply      
I think people should be able to sue companies that do this. They surely did not advertise it as "secure unless we lose the key". Having a backdoor in the first place could be counted as negligent (should be counted as outright fraud).
lawnchair_larry 6 hours ago 3 replies      
What does leaking your private key have to do with backdoor keys? Isn't this like saying that CAs are backdoored because somewhere there exists a private key for those certs?
FreeBSD Core statement on recent freebsd-update and related vulnerabilities freebsd.org
115 points by werid  11 hours ago   24 comments top 2
tptacek 9 hours ago 6 replies      
Every part of this statement is alarming. Had the statement not been made at all, and all I knew was that the FreeBSD update system had some vulnerabilities, I'd be left with a higher opinion of FreeBSD.

1. If there's a public exploit for a vulnerability, you disclose it to users, full stop. This is obvious.

2. If the steps you might normally take to remediate a vulnerability are themselves exploitable, you print that in bold letters in the announcement, full stop. "Requires active MITM" is just another way to say "requires real attacker".

3. You don't leave memory corruption vulnerabilities in software to preserve backwards compatibility. It is better to break software briefly than to leave memory corruption vulnerabilities in it.

All three of these FreeBSD statements are admissions not only of mistakes in the announcement process, but of broken principles as well. Yikes.

y0ghur7_xxx 10 hours ago 3 replies      
"The Security Advisory did not contain information on the theoretical implications of the vulnerability. A more explicit paragraph in the 'Impact' statement may have been warranted."

I may be overreacting on this, but this sounds like "you who found bugs in our code: document them better next time". I think the reporter does not owe freebsd anything. If someone owes someone else something, freebsd developers should thank the reporter for finding those vulnerabilities, and not ask for even more of him.

Why scaling and parallelism remain hard even with new tools and languages erlang-solutions.com
104 points by andradinu  11 hours ago   36 comments top 5
jondubois 3 hours ago 1 reply      
Highly scalable systems have to be designed in a particular way; it's an architectural concern - Your choice of language might make it somewhat easier to implement a highly parallel architecture, but the language itself will never make it 'easy' - Languages will always give you just enough rope to hang yourself (that is the cost of flexibility).

You could design a framework which is extendable and scalable in such a way that developers who want to write code on top of that framework don't need to think (much) about parallelism (see https://github.com/socketcluster/socketcluster#introducing-s...).

Unfortunately, you cannot build a highly scalable system without enforcing some rigid constraints. General purpose programming languages do not enforce restrictions on what design patterns you can or cannot use; that is the role of a framework.

That said, frameworks can never fully hide the complexity of parallel systems (not for all use cases) but at least they can guide you to the best approach when solving specific problems.

rdtsc 7 hours ago 1 reply      
That's why I like Erlang and Elixir. They were built to handle concurrency down to the core. Currently only languages on the BEAM VM provide a set of mature, built-in fault tolerance features (code reloading, isolated concurrency units -- processes, immutable data, sending messages between processes instead of acquiring locks).

I often see frameworks which claim to implement Erlang in "language $x" by adding a queue to a thread. But that is still very much behind what Erlang does, because it is missing these other components, namely fault tolerance.

Sure you can spawn OS processes, or spin up multiple machines/containers. But that is not built-in, so you have to manage the additional stack for that. Java has code reloading, but it is not quite the same, and so on.

And this is not just talk in generalities, these properties translate directly to benefits and profits -- faster development time, less ops overhead (some parts of the backend can crash and restart, without having to wake everyone up), less code to maintain and dependencies to manage.

Just the other day I had a typical distributed systems problem -- I didn't add backpressure and so messages were piling up in the receiver's mailbox. The simplest solution I tested was just switching a gen_server:cast to a gen_server:call. It was a 2-3 line change. Hotpatched on a test cluster and the problem was fixed in a few minutes. Ultimately I did something else, but the point is that with a language built for concurrency it was just a couple of lines of code. Had that been a custom RPC solution, with some serialization and some socket code, it would have taken a lot more to write, test and deploy. All that adds up quickly and can make or break the project.
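The cast-vs-call distinction the comment describes can be modeled in a few lines of C. This is a toy, single-threaded model: Erlang's gen_server:cast is an unbounded async send, gen_server:call makes the producer wait; the names and the fixed capacity here are mine, not an Erlang API:

```c
typedef struct {
    int queued;  /* messages waiting in the mailbox */
    int cap;     /* what the consumer can actually keep up with */
} mailbox;

/* cast-style send: always accepted, so a slow consumer's mailbox
   grows without bound -- the piling-up problem in the comment. */
void send_cast(mailbox *m) { m->queued++; }

/* call-style send: refused (in real life: blocks the producer) at
   capacity, which is what applies the backpressure. */
int send_call(mailbox *m)
{
    if (m->queued >= m->cap)
        return 0;   /* backpressure: producer must wait */
    m->queued++;
    return 1;
}
```

The design point the comment makes is that in Erlang this switch is a one-word change because both send styles already exist in the runtime; in a hand-rolled RPC stack you would be writing the bounded path from scratch.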

RonanTheGrey 7 hours ago 0 replies      
A big part of the issue is that distributed computing is hard to think about, for the same reasons that multithreading is hard to think about: the single-threaded model (either on a local machine or amongst a set of machines via something like RPC) is much easier to think about most of the time, even if it's the wrong overall solution. Conway's Law only kicks in when organizational issues FORCE distribution, and people are forced to think in a distributed way.

It's an entirely different way of thinking about a problem, based entirely on giving up control.

IgorPartola 9 hours ago 2 replies      
Because it is not a tool or a language problem but a network reliability problem?
njharman 8 hours ago 4 replies      
Is it really that hard? Tons of companies do it. The LAMP stack has been scalable for a couple of decades or so. There are platforms and services that you can rent/buy if you don't wanna roll your own. I can get a free databricks account and go to town with pyspark; there's even free courses on it.

Threading is hard and I'm sure will always be hard. But threading is the wrong way to scale or parallelise.

Interactive Sudoku Zero-Knowledge Proof manishearth.github.io
34 points by Manishearth  7 hours ago   4 comments top
dewitt 4 hours ago 2 replies      
I don't know the math behind ZKP, but how does this statement work:

> If Peggy didn't have a solution, there was a chance she'd be caught in this round if Victor had asked for the right set of 9 squares to be revealed.

Doesn't this technique make it a "slightly more than zero knowledge proof", where the amount of knowledge revealed correlates to the probability a proof was obtained? The more knowledge, the more proof?

Don Batemans terrain mapping device bloomberg.com
84 points by forrest_t  11 hours ago   25 comments top 6
outworlder 10 minutes ago 0 replies      
> Soliday had more success at United. The airline agreed to help Batemans team test it so it could be certified by the FAA, he said. Most other carriers balked. It took another high-profile fatal crash to change their minds.

It's amazing how much blood is required to grease the gears of the world's bureaucracy...

jaynos 9 hours ago 2 replies      
>Bateman was always fascinated with airplane crashes. As an 8-year-old school boy in 1940 in Saskatoon, Canada, he and a friend sneaked out of class after two military planes collided and crashed nearby. As punishment, his teacher made him write a report on what happened.

A much more appropriate punishment than detention.

E6300 7 hours ago 2 replies      
Is it just me, or does the crash data look really noisy? Yes, since 2001 the maxima are clearly lower, but in the period 1974-2001 I don't see much difference from the period 1950-1974.
watersb 8 hours ago 0 replies      
Terrain map. HITS (highway in the sky) based on current fuel, aircraft flight characteristics, weather.

On an iPad.


davidw 8 hours ago 0 replies      
"Extremely rapid deceleration" ? Title's a bit off, but it's a nice article about a guy making the world a better place.
mfringel 9 hours ago 2 replies      
Please include "Crashing Into Mountains" in the title. It's a bit clickbaity right now.
Algorithms that select which algorithm should play which game togelius.blogspot.com
39 points by togelius  8 hours ago   11 comments top 8
iandanforth 4 hours ago 2 replies      
While interesting, this research is not on the path to "true" artificial intelligence. It may solve playing a majority of video games given enough effort, but that is not the author's stated goal. Why do I think this?

The representations of the various algorithms are not unified. The question "what algorithm lies halfway between JinJerry and YOLOBOT" is difficult to answer. This is because they are discrete solvers whose implementations cannot be seamlessly blended. It is only at the level of their decision outputs that they share a common language.

The strength of natural intelligence derives partly from the fact that all strategies are implemented using the same components. Strategies can be described as sparse activation patterns of a neural substrate. Any strategy can be added to, subtracted from, or otherwise combined with another because the activations have a common representation language. The choice is never "either/or" for natural intelligence but "how much of which?"

This problem is also found in the meta-selection problem. The decision tree used to choose a component algorithm (even if applied repeatedly throughout the course of a game) doesn't allow for strategic blending, and itself is not implemented in a language common to the strategies.

In contrast, biological strategies are selected by competition between signals from many lower-level systems and top-down control and predictions. Ultimately a decision such as "fight or flight" comes down to how strongly a sparse set of neurons is firing, and which motor paths are suppressed and which are activated. Because both the strategies and the evaluation of those strategies are implemented in a common substrate, you can blend, adapt, compare and update all aspects of the system. This is crucial to the adaptability and speed of natural intelligence.

Edit: I should note I am in full agreement with the author that simulated environments (of which video games are an example) are going to be essential for the creation of "true" AI. My comments relate specifically to the line of inquiry in the papers described.
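A toy illustration of the "either/or" vs "how much of which" distinction (my own sketch with invented preference vectors, not taken from the papers): a discrete meta-selector commits entirely to one solver's output, while a shared-representation blend mixes preferences before deciding.

```python
actions = ["left", "right", "shoot", "wait"]

# Hypothetical per-action preferences from two solvers.
strategy_a = [0.7, 0.1, 0.1, 0.1]
strategy_b = [0.1, 0.1, 0.7, 0.1]

def pick(prefs):
    # choose the action with the highest preference
    return actions[max(range(len(prefs)), key=prefs.__getitem__)]

# Discrete meta-selection: commit entirely to one strategy ("either/or").
either_or = pick(strategy_a)

# Shared-substrate blending: weighted mix, then decide ("how much of which").
blend = [0.4 * a + 0.6 * b for a, b in zip(strategy_a, strategy_b)]
how_much = pick(blend)

print(either_or, how_much)  # -> left shoot
```

The blended decision can differ from either solver's top choice -- something a selector that only picks whole algorithms can never produce.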

daveguy 2 hours ago 0 replies      
Regarding the grid-graph comparison of how different algorithms perform on different games, two questions:

1) What is the source data for that plot?

2) You specify "lighter = better", but how are they normalized across games and algorithms? How is better and worse quantified to get a "lightness"?

Edit: Found #2 in the second paper. Still don't know what "25 wins is white and 0 wins is black" means. How do you "win" some of these?

Two papers are here:



mikek 4 hours ago 0 replies      
For those of you interested in this general area, I suggest taking a look at General Game Playing. https://en.wikipedia.org/wiki/General_game_playing

Disclaimer: My PhD advisor was the originator of this idea.

daveguy 3 hours ago 0 replies      
This is a great post. (EDIT: Great blog all around!) The grid of how different algorithms perform is particularly nice. I would love to see how that grid evolves with new algorithms (specifically those new algorithms that make use of multiple algorithms).

Also, the name hyper-heuristics sounds like a recently made-up term to stand out in search queries. The longer-established name for it is meta-heuristics (which is included as a keyword in the paper linked as "hyper-heuristics"). Meta because they are heuristics for algorithm selection rather than heuristics for solving a specific problem, and heuristics because, as iandanforth mentioned, this is not a low-dimensional, continuous, differentiable problem space that lends itself to an optimal solution.

oslavonik 3 hours ago 0 replies      
Great post.

OT: I accidentally swipe left/right every time I'm on blogspot, taking me to the previous/next post. Maybe I'm just fat-fingering all around, but this is a horrible UX.

mccourt 7 hours ago 0 replies      
Great article and great paper. Similar set of questions that get asked as part of AutoML, including the idea of hyper-heuristics, but in the AI community instead of data science. Thanks for the insights.
fitzwatermellow 8 hours ago 1 reply      
Link to paper needs to be fixed.

Thanks togelius for the survey of the current state of the art around GVGAI! It will be interesting to see how adversarial methods influence future game design. Is a perfectly "instinctual" game possible? One that can never be beaten by any machine but that even a two-year-old can master immediately?

nickpsecurity 6 hours ago 0 replies      
Glad you submitted this, as I've needed an update on the game AI field. A bit different than it was in my day, but it still shows what I argued all along: hybrid methods will be the best. For game generation, I agree that generating many new, unpredictable experiences for the algorithm is ideal. I also agree it's within computational reach. Doesn't necessarily take AI, though.

For instance, much of that could be done with declarative, dynamic programming using templates, aspects, or constraint solvers. The idea being you create objects with relationships, attributes, and constraints. Let's look at an item object. It might have physical properties such as shape, size, movement speed, acceleration, ability to phase in/out, invisibility, and area of effect. It might have reactive properties where any of that changes in a specific way upon physical interaction, a game event, or a global setting. It might have effects on players that change an existing attribute's value, remove one, or add one. It might cause a pre-registered event in the game or for a player, with specific or random values as input. Any of this can be programmed in available languages as taking inputs, performing a computation on them, and producing output. The declarative aspect means that, after each is created with constraints or types, inference algorithms (or even if-then's if you're a masochist) can produce an imperative implementation that lines them all up properly for an actual game.
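As a toy version of the item idea above (my own sketch; the attribute names and constraints are invented for illustration): declare attribute domains and constraints, then generate concrete items that satisfy them. Here it's done by rejection sampling -- a real system would hand the same declarations to a constraint solver.

```python
import random

# Declarative attribute domains for a hypothetical item object.
ATTRIBUTES = {
    "size":        range(1, 11),
    "speed":       range(0, 21),
    "area_effect": range(0, 6),
}

# Declarative constraints over attribute values.
CONSTRAINTS = [
    lambda i: i["size"] + i["speed"] <= 25,    # big items move slowly
    lambda i: i["area_effect"] <= i["size"],   # effect scales with size
]

def generate_item(rng):
    # Generate-and-test: sample each attribute from its domain,
    # keep the item only if every constraint holds.
    while True:
        item = {name: rng.choice(list(dom)) for name, dom in ATTRIBUTES.items()}
        if all(c(item) for c in CONSTRAINTS):
            return item

rng = random.Random(42)
items = [generate_item(rng) for _ in range(5)]
for it in items:
    print(it)
```

Adding a new attribute or constraint is one line of declaration; nothing imperative downstream has to change, which is the appeal of the declarative framing.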

I'm not sure what the difficulty will be in doing such things for an entire game rather than one item. However, one item on a map, interacting with global state and one player's state while trying to do all of the above, might make a nice testbed. Then a number of items. Then a number of players. Whatever method solves it easily and generates efficient code gets used when the approach is expanded to apply a similar range of attributes to players, maps, NPCs, or even the passage of time itself.

Just some thoughts on that. Haven't read your papers about the specific algorithms yet. Did download Panorama for later reading. So, is anything close to generating a game engine from descriptions of its parts, the way I described items? I think that, once enough stuff was loaded in, it would generate a combinatorial explosion like wannabe AIs have never seen (and couldn't cheat easily).
