hacker news with inline top comments    7 Sep 2017 News
Nginx Unit nginx.com
120 points by tomd  1 hour ago   38 comments top 17
mmahemoff 38 minutes ago 0 replies      
Confusing description. After seeing the Github README (https://github.com/nginx/unit#integration-with-nginx), it looks to be Nginx's alternative to low-level, language-specific, app servers, e.g. PHP-FPM or Rack, with the benefit that a single Unit process can support multiple languages via its dynamic module architecture, similar to Nginx web server's dynamic modules.

It's still intended to run behind Nginx web server (or some other web server), much like you'd run something like PHP-FPM behind a web server.

skrebbel 25 minutes ago 1 reply      
This looks pretty cool, and makes me sad that Mongrel2 never became popular. In short: Mongrel2 solves the same problem, but does it by letting your application handle requests and websocket connections over ZeroMQ instead of e.g. FastCGI.

I guess it lost momentum when ZeroMQ did. Anyone know why? Sounds like a dream solution in the current microservice hype.


pilif 22 minutes ago 0 replies      
> It is not recommended to expose unsecure Unit API

why do people always use "not recommended" when they actually mean "do not ever do this or you'll end up the laughing stock in the tech press"

Exposing this otherwise awesome API to the public will amount to a free RCE for everybody. So never expose this to the public, not even behind some authentication.

It's very cool that by design it's only listening on a domain socket. Don't add a proxy in front of this.

bluetech 10 minutes ago 0 replies      
I'm happy to see this. nginx itself is excellent software, I'll be happy to use similar tech for the application server as well (instead of uwsgi).

There are a couple of options I'd like to see added to the Python configuration though before I could try it:

- Ability to point it at a virtualenv.

- Ability to set environment variables for the application.

chatmasta 22 minutes ago 1 reply      
I'm having a hard time seeing what niche this fills. It seems to be both a process manager and TCP proxy. What am I missing here? What makes this better than, for example, using docker-compose?

I think a "how it works" or "design doc" would be really helpful.

That said, the source files do make for pleasant reading. The nginx team has always set a strong example for what good C programming looks like.

EDIT: Their blog post [0] makes this more clear... nginx unit is one of four parts in their new "nginx application platform" [1]

[0] https://www.nginx.com/blog/introducing-nginx-application-pla...

[1] https://www.nginx.com/products/

svennek 47 minutes ago 1 reply      
So it looks like they basically rewrote uwsgi and slapped a REST API on top of it... (as a big fan of uwsgi, that seems like a reasonable thing to do...)
yeukhon 34 minutes ago 1 reply      
It is in beta, but I hope this won't become a commercial-only product.
foota 1 hour ago 1 reply      
This would take the place of something like tomcat or uwsgi, right?
eeZah7Ux 20 minutes ago 2 replies      
[honest question, not being negative] what real use-case is not already being addressed by existing technologies?
amouat 1 hour ago 1 reply      
Also note github repo at https://github.com/nginx/unit
marktam264 5 minutes ago 0 replies      
Is this like AWS Lambda you could put in your own cloud?
smegel 15 minutes ago 0 replies      
> Full support for Go, PHP, and Python;

Does it do WSGI then? Did they write the equivalent of mod_wsgi?

Antwan 1 hour ago 0 replies      
Any reports on the performance (vs. uWSGI, for example)?
hathym 42 minutes ago 0 replies      
How does it compare to OpenResty/LuaJIT?
baybal2 1 hour ago 1 reply      
Looks to be a good candidate to replace omnipresent nginx based API routers
argsno 33 minutes ago 0 replies      
So, it's an application server?
reificator 1 hour ago 0 replies      
The concept of XUnit is so ingrained in my head that I assumed it was a unit testing framework for NGINX.

The rest of the headline cleared it up of course, but I was curious for a minute how that would look.

EDIT: When discussing a new product, I would think the name is a fair point of discussion.

Furthermore after this thread's title changed, it now requires a clickthrough to dispel similar misunderstandings.

IPv10 ietf.org
26 points by Aissen  1 hour ago   2 comments top
bertolo1988 4 minutes ago 1 reply      
Why not IPv16 and fix the problem forever?
Optimizing web servers for high throughput and low latency dropbox.com
849 points by nuriaion  16 hours ago   69 comments top 23
mfonda 8 hours ago 0 replies      
Great post! Contents of it aside, I very much like the disclaimer:

> In this post we'll be discussing lots of ways to tune web servers and proxies. Please do not cargo-cult them. For the sake of the scientific method, apply them one-by-one, measure their effect, and decide whether they are indeed useful in your environment.

Far too often I see people apply ideas from posts they've read or talks they've seen without stopping to think whether or not those ideas make sense in the context they're applying them to. Always think about context, and measure to make sure it actually works!

jbergstroem 14 hours ago 2 replies      
I thoroughly appreciate how the majority of the article doesn't even go into the nginx config whereas most of the internet would discuss a result of the search query "optimized nginx config". Much [for me] to learn and much appreciated, Alexey.
indescions_2017 10 hours ago 0 replies      
Great write-up. And even if you use standard instances there is plenty to optimize. Kudos to Dropbox, Netflix, Cloudflare and everyone else who demonstrates this level of transparency.

And just for reference, AWS does provide enhanced networking capabilities on VPC:


mixedbit 1 hour ago 1 reply      
Does the physical location of a server matter for high-throughput use cases? If a client is downloading large files using all its available bandwidth, is the download time noticeably better if the server is close to the client?
DamonHD 16 hours ago 5 replies      
Fab tour de force! Great!

(Though I've been optimising my tiny low-traffic site on a solar-powered RPi 2 to apparently outperform a major CDN, at least on POHTTP...)

Upvoter33 15 hours ago 0 replies      
This is a wonderful article. It also is indicative of why people who really understand systems will always be so employable - it's just so hard to make things run well.
excitom 15 hours ago 2 replies      
This is a great article and I've bookmarked it for future reference.

I observe, though, that if you are tuning a system to this level of detail you likely have a number of web servers behind a load balancer. To be complete, the discussion should include optimization of interactions with the load balancer, e.g. where to terminate https, etc.

stuxnet78 1 hour ago 0 replies      
Interesting article. Well, to be honest, some of the concepts were totally new to me, and I found learning them from this article interesting. And thanks for the other links too.
exikyut 11 hours ago 2 replies      
I have a related question that most would probably consider relevant but which this article (quite rightly) doesn't answer (as it's not relevant for Dropbox).

Let's say I want to prepare a server to respond quickly to HTTP requests, from all over the world.

How do I optimize where I put it?

Generally there are three ways I can tackle this:

1. I configure/install my own server somewhere

2. Or rent a preconfigured dedicated server I can only do so much with

3. I rent Xen/KVM on a hopefully not-overcrowded/oversold host

Obviously the 1st is the most expensive (I must own my own hardware; failures mean a trip to the DC or smart hands), the 2nd will remove some flexibility, and the 3rd will impose the most restrictions but be the cheapest.

For reference, knowing how to pick a good network (#1) would be interesting to learn about. I've always been curious about that, although I don't exactly have anything to rack right now. Are there any physical locations in the world that will offer the lowest latency to the highest number of users? Do some providers have connections to better backbones? Etc.

#2 is not impossible - https://cc.delimiter.com/cart/dedicated-servers/&step=0 currently lists a HP SL170s with dual L5360s, 24GB, 2TB and 20TB bandwidth @ 1Gbit for $50/mo. It's cool to know this kind of thing exists. But I don't know how good Delimiter's network(s) is/are (this is in Atlanta FWIW).

#3 is what I'm the most interested in at this point, although this option does present the biggest challenge. Overselling is a tricky proposition.

Hosting seems to be typically sold on the basis of how fast `dd` finishes (which is an atrocious and utterly wrong benchmark - most tests dd /dev/zero to a disk file, which will go through the disk cache). Not many people seem to setup a tuned Web server and then run ab or httperf on it from a remote with known-excellent networking. That's incredibly sad!

Handling gaming or voice traffic is probably a good idea for the target I'd like to be able to hit - I don't want to do precisely that, but if my server's latency is good enough to handle that I'd be very happy.

tiffanyh 13 hours ago 0 replies      
My mind kind of explodes reading that article.

So many bells & whistles and I don't even know where to begin.

Thaxll 15 hours ago 1 reply      
"people are still copy-pasting their old sysctls.conf that theyve used to tune 2.6.18/2.6.32 kernels."


doubleorseven 14 hours ago 1 reply      
Wow. This is the kind of article I'm expecting to see when I google for "Nginx best practices 2018". I am so far behind; maybe 20% of my usual setup includes those recommendations. Thank you, Dropbox.

If someone can point me to a thorough article like this on the lua module, I will thank her/him forever.

leetrout 14 hours ago 1 reply      
I'm surprised there was no mention of tcp_max_syn_backlog and netdev_max_backlog.

When I've previously tuned a server I have used both of those to my advantage... Another comment on here talked about this ignoring an existing load balancer so maybe those sysctls are more appropriate on an LB?

z3t4 11 hours ago 1 reply      
Would be cool with a story like "we did these adjustments so each server could handle 10% more requests", etc. This blog post seems to only cover software; you can also gain a lot of performance from hardware modding. Someone said premature optimization is the root of all evil... So first identify bottlenecks in real workloads, no micro-benchmarking!
abc_lisper 15 hours ago 3 replies      
Just goes to show how much one should know in our field to make the machine work well for you. For somebody that can understand the article, the stuff is mostly known, but if you don't know it, the article is pretty dense.

It would be nice if someone made a Docker image with all the tuning set (except the hardware).

It would have been nicer if the author had shown what the end result of this optimization looks like, with numbers, comparing against a standard run-of-the-mill nginx setup.

rcchen 12 hours ago 0 replies      
> If you are going with AMD, EPYC has quite impressive performance.

Does this imply that Dropbox has started testing out EPYC metal?

pavs 13 hours ago 1 reply      
No mention of VPP - does it not apply to applications? Or routing/switching?


alinspired 9 hours ago 0 replies      
Great write-up for traditional/kernel tuning! I guess I'm naively waiting for DPDK-based user-space solutions to appear.
cat199 13 hours ago 1 reply      
great article overall - but starts off by saying 'do not cargo cult this' and then proceeds to prescribe many 'mandates' without giving any rationale behind them.
dragonwarrior 1 hour ago 0 replies      
Very interesting
mozumder 14 hours ago 3 replies      
So how does Linux compare now with FreeBSD in terms of throughput and latency? I remember like 10 years ago Linux had issues with throughput, which is why Netflix went with FreeBSD. Are they similar now?
korzun 12 hours ago 2 replies      
You can skip all of that nonsense and run FreeBSD.
hartator 15 hours ago 1 reply      
I wish they could make a MacOSX app that doesn't use almost 100% of one core all the time.
Articles on fractals, computer graphics, mathematics, demoscene and more iquilezles.org
173 points by adamnemecek  9 hours ago   11 comments top 7
panic 7 hours ago 1 reply      
Inigo Quilez was responsible for modeling vegetation procedurally in Brave:

"If you just render the geometry, it's pretty, but it doesn't look lush and furry," says supervising technical director Bill Wise. "We wanted Spanish moss hanging from tree limbs, and clumps and hummocks of moss. The Highlands of Scotland were like another character in the film, a living backdrop for what was going on. We had never tackled as vast an outdoor landscape, but we were able to generate it using insane procedural geometry developed by Inigo Quilez. He's a magician." (http://www.cgw.com/Publications/CGW/2012/Volume-35-Issue-4-J..., section "Painting with Code")

He contributed some basic lessons about it to Khan Academy's "Pixar in a Box" section on Environment modeling (https://www.khanacademy.org/partner-content/pixar/environmen..., direct link to first video: https://www.youtube.com/watch?v=fuwUltMAdYQ).

Also worth checking out is "Elevated", one of the coolest 4066-byte programs you'll ever see: http://www.pouet.net/prod.php?which=52938.

andybak 1 hour ago 1 reply      
Interesting timing as I've recently become mildly obsessed with signed distance fields - it seems a much more elegant way of describing 3d forms than piles of polygons.

This page has a great cache of formulae for various primitives, patterns and blend functions: http://mercury.sexy/hg_sdf/

The boolean and domain stuff there blew my mind. It's incredible how concise the descriptions are - goes a long way to explaining some of the more magical stuff that came out of the Shaderlab/Demoscene world.

Question - are many people using SDF/Ray marching in games outside of very niche uses? (clouds etc)?

The performance benefits only seem to kick in for very specific applications but I'd like to see whole levels/environments built like this. Maybe as the current gen of GPUs drops in price and become ubiquitous it will become a more feasible approach all round.

PS If anyone has access to a Rift (or a Vive via ReVive with a bit of fiddling) try out https://github.com/jimbo00000/RiftRay

It's a lot of Shaderlab stuff in VR and some of it is truly astonishing in a headset. (I know you can run the website itself in WebVR but it's clunky as hell and performance isn't great)

ericjang 8 hours ago 1 reply      
I met Inigo at an internship when I was just starting out my CS career and he was kind and patient enough to mentor me on all sorts of cool computer graphics. A quote from "The Prestige" comes to mind:

"Oh, no, sir. This wasn't built by a magician. This was built by a wizard. A man who can actually do what magicians pretend to do."

He does things that I wouldn't have dreamed possible with fragment shaders and implicit geometry. https://www.youtube.com/watch?v=yMpG6qEb8js

IQ is a wizard and really inspired my interest in programming and math.

eggy 8 hours ago 0 replies      
His YouTube tutorial where he creates the beating heart composited on live video as he explains it is a great introduction for beginners and younger people [1]. I've learned so much from his sharing of his techniques and knowledge. Thanks Inigo!

 [1] https://www.youtube.com/watch?v=aNR4n0i2ZlM

adamnemecek 8 hours ago 1 reply      
Inigo Quilez is the person behind https://shadertoy.com.
krige 4 hours ago 0 replies      
A number of the articles, sadly, seem to be in the vein of a dry "so I did a thing once and it worked". Kind of a missed opportunity.
vortico 4 hours ago 0 replies      
Obligatory calling out for scrolljacking.
Bitumen balls could be a pipeline-free way to transport Alberta oil cbc.ca
33 points by gerry_shaw  3 hours ago   37 comments top 8
pjc50 1 hour ago 0 replies      
People are missing a couple of details in this: the Canadian "oil" in question is not a liquid from a well to start with, it's tar sands that have been dug up. The first step of processing is melting it to filter out the sand. This leaves you with viscous bitumen that cannot be pumped efficiently down a pipeline. So normally it's processed again to make "dilbit", diluted bitumen, in order to ship it to an oil refinery for cracking to produce actually useful petrol.
thereisnospork 1 hour ago 2 replies      
As a rule, it is far easier* to transport/handle liquids in large quantity than solids. Even ignoring the effort to convert/deconvert the oil at each end, I'm not seeing this as a step forward that will improve the bulk of crude-oil transport -- barring a more detailed analysis than what reads to me as 'pipelines bad.'

*Cheaper, safer, more efficient, requiring less maintenance, etc...

cperciva 2 hours ago 4 replies      
This is weird. Pipelines are the cheapest, most efficient, and safest way to move large amounts of oil. How is something which renders oil impossible to transport via pipeline a step forward?
tiku 1 hour ago 1 reply      
We can now make huge rube goldberg machines to transport those balls.
osrec 30 minutes ago 0 replies      
Could the oil not be put in containers and transported by rail in its original form anyway? I'm not sure if converting to pellets really makes the problem easier to solve...
gattilorenz 2 hours ago 2 replies      
Nice, although I'm wondering which company will trade efficiency (using spheres to transport a liquid means you have lots of "wasted" space) for environmental safety... without a law imposing it, at least.

Maybe the economic incentive of balls just "rolling away" (thus remaining recoverable) in the event of a pipeline/tanker/carriage leak could balance this?

TeMPOraL 1 hour ago 1 reply      
Two questions I'm wondering about:

- How solid are those pebbles? I assume they're not like soft blobs that can easily split and merge together? But then how much abuse can they take? E.g. if they crack easily, you can't really stack them very high.

- The obvious one - are the pebbles flammable?

Boothroid 2 hours ago 3 replies      
Anything that facilitates increased use of tar sands hydrocarbons is terrible news for the environment. We should ban this miserable trade.
AssemblyScript: A Subset of TypeScript That Compiles to WebAssembly github.com
161 points by indescions_2017  10 hours ago   59 comments top 12
Fifer82 3 hours ago 2 replies      
I love TypeScript! Forgive my ignorance but what purpose does WebAssembly have in this context? I assumed that stuff like ASM was really for other non-native applications. Say for example a C++ game which then can run in the browser or stuff like that.

So as a TypeScript guy with AssemblyScript at my fingertips, what doors does that open for me?

I occasionally have to make HTML5 games in Canvas. Is this the kind of path where WebAssembly could be beneficial?

One day will there simply be an end build step to turn everything into web assembly or is it never intended for use with the DOM?

vosper 10 hours ago 6 replies      
I get the attraction for compiling a non-web language (say Python) so that you can write Python and have it run in the browser, but since Typescript already compiles to JS I don't really get it. What am I missing? Can it be faster, or do more/different things?
mhd 3 hours ago 0 replies      
I'm still hoping that I'll end up on the "Hejlsberg roundabout", i.e. that this whole WebAssembly thingamajig will get me Turbo Pascal productivity again. (I know, we progressed since then in regular IDE land, but for the web, this would be a step forward)
freechessclub 9 hours ago 3 replies      
...where "A Subset of TypeScript" = JavaScript?

If I am not using this, what are the other languages I can use today that compile down to WASM?

dakom 2 hours ago 0 replies      
This looks really cool to me for a learning environment. E.g. write some typescript, see how it compiles to s-expressions, etc.
redgetan 8 hours ago 1 reply      
Just noticed that quite a few browsers support WebAssembly already (http://caniuse.com/#search=webassembly). I wonder whether compiling a Unity game to WebAssembly is better (more stable) than Unity to ASM.js.
bernadus_edwin 2 hours ago 0 replies      
Maybe someday xamarin can use this compiler
aussieguy123 8 hours ago 2 replies      
I thought WebAssembly didn't have a garbage collector yet.
themihai 8 hours ago 1 reply      
I assume there is no DOM access, right?
eyerow 3 hours ago 0 replies      
You should reconsider the name "AssemblyScript" if you want it to catch on.
camus2 9 hours ago 3 replies      
So if I get things right :

- ASM.js : basically assembly with a subset of javascript syntax

- WebAssembly : assembly with a different format which doesn't require parsing javascript.

- languages built on top of WebAssembly : C/C++ like languages with explicit memory management/ no garbage collection.

- assemblyscript : C with a typescript syntax ?

lomnakkus 8 hours ago 0 replies      
I can't help but think that this is such a "Hype Train" thing.
Climate Engineers Sucking Carbon Dioxide from the Atmosphere bloomberg.com
32 points by petethomas  4 hours ago   26 comments top 8
Tade0 19 minutes ago 0 replies      
There's one significant thing they're achieving here: putting a price on CO2 in the atmosphere.

Some of the resistance to action on climate change comes from the notion that the cost is unknown, which in turn stems from the fact, that to date there was no scalable way to undo the emissions.

Now that a way has been demonstrated emitters can choose whether they want to reduce their emissions or undo them(assuming the right legislation forcing them to do so is in place). I believe most will go with the former option until it stops being economical.

enugu 2 minutes ago 0 replies      
Wouldn't it be cheaper to first capture CO2 from the output of coal plants and other industries where it is in concentrated form? Also, economies of scale. Once we bring down the centralized emissions, we would still have to deal with the distributed sources (renewables can help) and the meat industry.
intended 2 hours ago 4 replies      
I find that the idea of "climate engineers" and "geo engineering" has markers which will make it appeal to the same people currently peddling climate denial.

My running bet/prediction is that

1) the imbroglio in America on climate denial will continue,

2) most of the world will feel happy that they are doing less than it should, but more than the stat

3) Eventually someone will sell the idea of !!GEO ENGINEERING!! to the same people they feed all sorts of Denialism.

This will sell well because it has "WE'LL MAKE THE WORLD GREAT AGAIN! WITH HARD WORK AND GUMPTION", and will of course be subsidized by the Government so it will have "JOB CREATION!" written all over it. I could write a satirical Ad for it today, and be assured that its twin will play for real in 20 years.

While this comes across as deeply cynical (it isn't cynical enough), it's based on the debacle that is Environmental Protection, from before I started reading newspapers in the 1980s.

The news today may be dominated by what America is doing, but lets not forget that it was and still is a MASSIVE uphill struggle to get people to care for decades.

And I am ignoring the effects of funding into Climate Denialism , and FUD campaigns.

In the end people are going to always choose themselves over the environment.

In a world where clean coal can be marketed, "Climate Engineering" sounds like an entire industry waiting to be born.

sorry, I wish I had something more optimistic to say.

legulere 25 minutes ago 0 replies      
> Working around the clock, each capture plant can vacuum about 50 tons of CO2 from the atmosphere a year, Wurzbacher says.

For comparison, the per-capita CO2 emissions in the US are 20 tons. With an average household size of 2.58, you would need one of those plants + CO2 storage facilities per household.
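A quick sanity check of that arithmetic (the figures are taken from the comment above, not independently verified):

```c
/* Back-of-envelope check: 50 t/year capture per plant vs. ~20 t/year
   per-capita US emissions and 2.58 people per household. */
static double plants_per_household(double per_capita_tons,
                                   double household_size,
                                   double plant_tons_per_year)
{
    return per_capita_tons * household_size / plant_tons_per_year;
}
```

20 × 2.58 / 50 ≈ 1.03, i.e. almost exactly one plant per household, as the comment says.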

dalbasal 1 hour ago 2 replies      
These sorts of ideas are in a weird place. To many who are actively seeking better performance in our carbon reduction efforts, ideas like this could be seen as a threat to those efforts.

ATM, the debate is (1)"is the carbon-climate problem real?" and (2) "If so, how do we reduce carbon emissions?". "Climate Engineering" would complicate 2 a lot, probably splitting political support.

Politically, they open the door to continued or increased carbon emmissions today. They let us avoid addressing the "root cause." This is all regardless of timelines or progress. Just the existence of the idea in public debate might be enough to swing the opinion balance.

I imagine there are some who would ideally like to keep geoengineering as a quiet Plan B so as not to disturb current efforts, but that's not really possible.

Overall, I think these ideas (if actually viable) will inevitably enter the discourse in the long term. We are already engineering in some sense. We have models and targets for both carbon and temperature. IE, we want to take control (to a small extent) of the climate. That's engineering, and engineers always look for more tools eventually.

myrloc 4 hours ago 2 replies      
I'm a fan of the Climeworks concept, but the article and the Climeworks website fail to address the obvious question: what's the net balance of CO2 collected versus CO2 created in producing the energy needed to power their plants?
rdl 1 hour ago 1 reply      
I wonder how much use CO2 extracted from the air has. Greenhouse CO2 supplementation sounds interesting but I assume there are cheaper ways to produce CO2 already.
ribfeast 2 hours ago 0 replies      
> quickly bonded over their shared loves for mountain climbing and beer

Didn't realize that was so unusual in Switzerland.

Missed optimizations in C compilers github.com
144 points by ingve  11 hours ago   33 comments top 6
clarry 55 minutes ago 1 reply      
GCC can't optimize out the mask (which is required to avoid UB if n may be equal to or greater than 64):

  #include <stdint.h>

  uint64_t rol(uint64_t n, uint64_t val)
  {
      n &= 63;
      return (val << n) | (val >> 64 - n);
  }
Compiles to

  rol(unsigned long, unsigned long):
          mov     rcx, rdi
          mov     rax, rsi
          and     ecx, 63
          rol     rax, cl
          ret
Clang and ICC do get it right.
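For reference, a variant of the idiom that also avoids the undefined shift by 64 when n is a multiple of 64, and that mainstream compilers generally pattern-match to a single rotate instruction as well (a sketch, not checked against any particular GCC version):

```c
#include <stdint.h>

/* (64 - n) & 63 maps n == 0 to a shift count of 0 instead of the
   undefined shift by 64, while leaving all other counts unchanged. */
static inline uint64_t rol64(uint64_t val, uint64_t n)
{
    n &= 63;
    return (val << n) | (val >> ((64 - n) & 63));
}
```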

nanolith 7 hours ago 1 reply      
These results don't surprise me.

A lot of these suboptimal examples come down to the complexity of the optimization problem. Compilers tend to use heuristics to come up with "generally good enough" solutions in the optimizers instead of using a longer and computationally more expensive foray into the solution space. Register allocation is a prime example. This is an NP-Hard problem. Plenty of heuristics exist for finding "generally good enough" solutions, but without exhausting the search space, it typically isn't possible to select the most optimal solution, or even to determine whether a given solution is optimal. Couple this with the tight execution times demanded for compilers, and issues like these become pretty common.

Even missed strength reduction opportunities, such as eliminating unneeded spills or initialization, can come down to poor heuristics. It's possible to write better optimizer code, but this can come at the cost of execution time for the compiler. Hence, faster is often chosen over better.

In first-tier platforms like ix86 and x86_64, enough examples and eyes have tweaked many of the heuristics so that "generally good enough" covers a pretty wide area. As someone who writes plenty of firmware, I can tell you that it's still pretty common to have to hand-optimize machine code in tight areas in order to get the best trade-off between size, speed, and specific timing requirements. A good firmware engineer knows when to trust the compiler and when not to. Some of this comes down to profiling, and some comes down to constraints and experience.

Then, there are areas in which compilers typically rarely produce better code than humans. Crypto is one example. Crypto code written in languages like C can break in subtle ways, from opening timing oracles and other side-channel attacks to sometimes getting the wrong result when assumptions made by the developer and the optimizer are at odds. In these cases, hand-written assembler -- even in first tier platforms -- tends to be both faster and safer, if the developer knows what he/she is doing.

pkaye 8 hours ago 2 replies      
Among other things, I've seen lots of inefficiencies with bitfields. I tend to use them a lot for hardware register access and packing of data in embedded development. Imagine a bitfield that fits into one word, and setting each of the bitfields to a constant. A good compiler should be able to set all the values with one load operation. Some compilers would break this into many separate loads. I think the ARM compilers were worse at this while Clang would optimize it much better. Many times I had to forgo bitfields and use macro definitions and masking to get the best code generation at the cost of readability.
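As an illustration of the mask-and-macro fallback described above (the register layout here is made up; bitfield layout is also implementation-defined, which is another reason embedded code often prefers explicit masks):

```c
#include <stdint.h>

/* Hypothetical 32-bit control register: enable (bit 0), mode (bits 1-3),
   divide (bits 4-11). With explicit masks, all fields are composed in a
   register and the whole word can be written with a single store. */
#define CTRL_ENABLE       (1u << 0)
#define CTRL_MODE(m)      (((uint32_t)(m) & 0x7u) << 1)
#define CTRL_DIVIDE(d)    (((uint32_t)(d) & 0xFFu) << 4)

static uint32_t ctrl_compose(uint32_t mode, uint32_t divide)
{
    return CTRL_ENABLE | CTRL_MODE(mode) | CTRL_DIVIDE(divide);
}
```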
haberman 10 hours ago 1 reply      
I haven't worked on ARM much, but I am surprised how often I find sub-optimal code like this coming from production compilers. Here's one that I found several years ago in GCC/x86-64. Took a few years to fix:


Animats 2 hours ago 2 replies      

 char fn1(float p1) { return (char) p1; }
That's undefined behavior. Don't complain about performance.

chaboud 3 hours ago 1 reply      
While there may be some compiler misses, I started doubting the whole thing when I hit:

"Missed simplification of multiplication by integer-valued floating-point constant

Variant of the above code with the constant changed slightly:

  int N;
  int fn5(int p1, int p2)
  {
      int a = p2;
      if (N)
          a = 10.0;
      return a;
  }

GCC converts a to double and back as above, but the result must be the same as simply multiplying by the integer 10. Clang realizes this and generates an integer multiply, removing all floating-point operations."

A double or float literal multiply followed by an integer conversion is nowhere near the same as an integer literal multiply. If the coder wanted = 10 (or even = 10.0f), that was available. If = 10.0 was written, it should generally be compiled that way unless --superfast-wreck-floating-point was turned on...

Voynich manuscript: the solution? the-tls.co.uk
228 points by noahth  14 hours ago   71 comments top 24
nkurz 12 hours ago 4 replies      
Other than declaring that the solution is obvious to a self-declared expert such as himself, the author (Nicholas Gibbs) doesn't appear to give any proof of his theory.

As far as I can find online, this piece is the only thing he has ever published about the Voynich manuscript: https://duckduckgo.com/?q=%22nicholas+gibbs%22+voynich

Who is Nicholas Gibbs? Does anyone besides Nicholas Gibbs trust his opinion on these matters? And how did he convince the TLS to publish this drivel?

(to avoid being entirely negative, here's a link to a blog that shows what some better Voynich research looks like: https://stephenbax.net)

dmbaggett 12 hours ago 7 replies      
There are many crank analyses of the Voynich manuscript floating around out there. The only thing I've seen that has any believability (I'm a former linguist) is this:



tl;dr: it's probably real writing, likely related to Roma/Syriac

kitanata 4 hours ago 0 replies      
Like, is everyone ignoring the picture at the top of the article? That looks like a pretty believable direct translation of the ligatures to me. I know it's not the whole thing, but it is plausibly consistent.

That image is titled p16_Gibbs1.jpg. To me that hints that the author is serious and is planning to release a detailed paper.

His final statement at the end of the article is really bold. "Not only is the manuscript incomplete, but its folios are in the wrong order and all for the want of an index."

Perhaps the author is going to provide the index, and the correct order for the folios while providing what he believes to be the missing pieces from other texts from that time period?

This article looks like a teaser to me for something significant. Let's hope anyway.

dwringer 12 hours ago 0 replies      
I see comments suggesting that it wouldn't be worth the effort to translate this based on the author's hypotheses, but there has been a substantial community trying to do just that for a long time. FWIW, the NSA appears to have published a book around 1978, The Voynich Manuscript: An Elegant Enigma [1], and having read that book, a couple of things from this article jump out at me as red flags about its interpretation. The idea that there were multiple artists is far from universally accepted; experts who have studied this in the past have not been able to conclusively state that there were more than one or two authors or artists, although the possibility does remain open. Secondly, the suggestion that each glyph represents a full word in Latin has also been studied; see the link for more information, but the frequency distributions and vocabulary size do not seem to make sense if that is the case (someone please correct me if I'm wrong).

In all I am surprised more progress has not been made since the advent of the internet and its crowd-sourcing potential. There is definitely no shortage of interpretations all over the internet, and in headlines from time to time. The last one I recall from a couple of months ago suggested that there was a specific Jewish birthing practice being illustrated on one of the pages that suggested a certain origin of the text. [2]



lisper 11 hours ago 2 replies      
The idea that the Voynich manuscript is a medical text seems plausible, but the theory that it uses a logographic representation (one symbol per word) rather than an alphabetic or even a syllabic (one symbol per syllable) one seems less likely to me. A cursory examination of the manuscript (http://www.voynich.nu/folios.html) reveals that the lexicography looks much more like an alphabetic encoding than a logographic one like Chinese. The symbols are collected into word-like groups separated by white space. Also, it appears that there are too many repeated symbols and insufficiently many distinct symbols for a logographic language.
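The symbol-count argument can be made concrete: read the same text once with characters as symbols and once with words as glyphs, then compare inventory size and reuse. A toy illustration in Python, on invented pseudo-Latin rather than an actual Voynich transcription (on real corpora a logography's glyph inventory runs into the thousands while an alphabet's stays flat):

```python
from collections import Counter

def stats(symbols):
    """(distinct symbol count, highest repeat count) for a symbol stream."""
    counts = Counter(symbols)
    return len(counts), max(counts.values())

sample = "folia de aqua et de radicis de seminis et de radicis"

# Alphabetic reading: every character is a symbol.
char_types, char_top = stats(sample.replace(" ", ""))

# Logographic reading: every word is one glyph.
word_types, word_top = stats(sample.split())

# A logographic text needs far more distinct symbols relative to its
# length, and repeats each one far less often.
print(char_types, char_top)   # small inventory, heavy reuse
print(word_types, word_top)
```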
fusiongyro 13 hours ago 3 replies      
Where's the actual solution? I feel like I'm missing something, because what I see is some plausible commentary and some interesting discussion of Latin and ligatures, but where's the actual decoding of the writing?
eponeponepon 13 hours ago 0 replies      
Fascinating, if accurate - and I rather hope it is. If it is, it would make the whole thing a rather instructive example of how siloing knowledge can hide truth; the author's domain knowledge has given him the tools to identify the manuscript, but it's generally been in the domain of conspiracists and 'hidden knowledge of the ancients' types.

It would be good to see a thorough study of it to test the author's hypothesis, of course.

Havoc 13 hours ago 4 replies      
Rather convenient "solution".

The solution is the heading and index...which are missing.

The author might be right, but that is essentially an unprovable statement and doesn't really amount to a solution; rather, it's a statement that it can't be solved.

klunger 2 hours ago 0 replies      
This guy started with a hypothesis and then set out to prove it by looking for evidence. He then said the evidence was missing (hacked off) but his hypothesis was still true. Really, it is not very persuasive.
mordae 12 hours ago 0 replies      
As a Czech myself, I prefer to believe the theory of several clever guys tricking someone important into buying a nonsense book of secrets, splitting the spoils, and having a good laugh. Resonates well... Heydrich called us the laughing beasts for a reason.
groby_b 10 hours ago 0 replies      
"I have a brilliant proof, but not only is the margin too small to hold it, it has been hacked off".
mntmn 11 hours ago 1 reply      
What if it is just Lorem Ipsum by someone who could draw but not actually write?
questerzen 5 hours ago 0 replies      
Other people have also suggested Latin as the most likely base language. A better discussion is provided here: http://www.science20.com/patrick_lockerby/patterns_of_latin_...
computator 11 hours ago 1 reply      
A sample translation is the key thing I wanted to read in this article, and all they gave was an illegible low-resolution snippet without an English translation -- very annoying.

As best as I can read, the purported Latin translation in the image at the top of the article says:

Folia de oz et en de aqua et de radicts de aromaticus ana 3 de seminis ana 2 et de radicis semenis ana 1 etium abonenticus confundo. Folia et cum folia et confundo etiam de eius decocole adigo aromaticus decocque de decoctio adigo aromaticus et confundo et de radicis seminis ana 3.

Feeding the above to Google Translate gives:

The leaves of Oz and added to the water and the aromatic radicts semen Ana ana 3 2 seed and the roots ana 1 etium abonenticus the mix. The leaves, when the leaves are decocole adigo and the mix of the aromatic decocque of the cooking adigo an aromatic mix of roots and seeds Ana 3.

Yes, I realize that the author's translation might be completely mistaken, but I'm curious to read what he thinks it says. If someone can make out the words better, please do so.

Nursie 11 hours ago 1 reply      
I see a theory, not a translation, am I alone in this?
emeraldd 12 hours ago 1 reply      
So the whole thing is written in a form of shorthand and the core index/naming that define what the individual pieces are is missing. I wonder if this wasn't meant as a "production" manuscript but as a reference document for the replication of a larger work?
bjackman 10 hours ago 0 replies      
Don't have time to read this properly, but I have been reading about the VM lately; the most interesting researchers I've found are Stephen Bax[1] and this YouTuber[2]

[1] https://stephenbax.net/?cat=5

[2] https://www.youtube.com/channel/UC-sW5dOlDxxu0EgdNn2pMaQ

Some really interesting analyses in there.

MikeGale 12 hours ago 0 replies      
A fascinating analysis, half done. If the whole text were decoded and the indices rebuilt, it would be more convincing.
jumpkickhit 10 hours ago 0 replies      
Interesting. I noticed YouTube pushing Voynich videos in my suggestions out of the blue last week, and now there's an article posted up here days later.
sabujp 10 hours ago 0 replies      

 By now, it was more or less clear what the Voynich manuscript is: a reference book of selected remedies lifted from the standard treatises of the medieval period, an instruction manual for the health and well being of the more well to do women in society, which was quite possibly tailored to a single individual.

abakker 5 hours ago 0 replies      
My theory: it is music, not writing. I wonder if that is even possible?
jv22222 11 hours ago 0 replies      
There is a good general synopsis of the Voynich manuscript on wikipedia:


Interesting stuff!

chx 3 hours ago 0 replies      
Betteridge's law of headlines is one name for an adage that states: "Any headline that ends in a question mark can be answered by the word no."
forgotmypw 9 hours ago 0 replies      
Site no worky without javascript, could someone paste a copy, please?
Facebook recruiting and Unix systems imgur.com
761 points by abhisuri97  10 hours ago   341 comments top 19
aspyrx 7 hours ago 32 replies      
Hey folks, I'm the student writing the emails in the post here. Thanks to everyone for their criticisms. While I was initially kind of shocked by the recruiter's response, I've had a lot of time to think about it today and have realized that I was being pretty damn condescending and spoke out of line without regard to the context. It's been a hard lesson learned. I honestly regret the whole exchange, and posting it online was inappropriate as well. I briefly debated deleting the image, but decided to leave it up for the sake of posterity and accountability.

Also, just to be clear, I do not (and never did) hold any hard feelings towards the recruiter; in fact, it was very kind of them to point out why I was not qualified in the first place. This has probably made me the most reflective I've been about how I let my ego get the best of me at times, and I hope it might serve as a warning to those who might be tempted to do the same "devsplaining" in similar situations.

Please let me know if you have any other criticisms beyond the ones already voiced in this thread. I'm reading through the comments here as I can, and it's been a lot of good advice. Thanks again.

type0 10 hours ago 5 replies      
Recruiter: We're looking for someone to fix the pipes.

You: Yes, I'm a qualified plumber and can do the job.

Recruiter: Sure, but can you fix our pipes?

You: Of course, that is what I was trained to do.

Recruiter: You keep saying you're a plumber but we need someone to fix our pipes

You: I can do it.

Recruiter: We need someone who has worked with pipes.

You: I have worked with those.

Recruiter: Sure, but we need someone to fix our pipes.


jdavis703 10 hours ago 11 replies      
This is not the right attitude to bring into the workplace. You'll have a range of people, from executives to clients to co-workers in other departments, who don't know what UNIX, APIs and POSIX are. If you can't communicate technical matters nicely then you have not yet developed the right attitude for a professional working environment. It's the same thing with doctors when they create metaphors to explain complicated problems. You need to speak in the layperson's terminology, and if all they know is UNIX then call it UNIX and leave it at that.
tw1010 8 hours ago 5 replies      
In situations like these you really ought to ask yourself what end being elitist like that actually serves. In engineering situations the purpose is usually some form of disguised signaling; this is the purpose of most sentences that start with "technically". But when you're talking to a recruiter that kind of move is unlikely to be effective. It's unlikely to affect your application positively, and the only effect you can expect is what happened in this interaction, namely rejection. Be honest about why you act the way you do, and be rational about your decisions.
skndr 8 hours ago 0 replies      
Reminds me of this: If Carpenters Were Hired Like Programmers: http://www.jasonbock.net/jb/News/Item/7c334037d1a9437d9fa650...


Interviewer: First of all, we're working in a subdivision building a lot of brown houses. Have you built a lot of brown houses before?

Carpenter: Well, I'm a carpenter, so I build houses, and people pretty much paint them the way they want.

Interviewer: Yes, I understand that, but can you give me an idea of how much experience you have with brown? Roughly.

Carpenter: Gosh, I really don't know. Once they're built I don't care what color they get painted. Maybe six months?

TheMagicHorsey 3 hours ago 1 reply      
I had a similar experience with a recruiter at Google ... only worse. Recruiters are the gatekeepers. In my case I was waiting for an answer on my interview at Google for 6 months ... just radio silence with no reply.

If you are lucky, maybe you have some friends at Facebook that can intervene on your behalf. If not, there's other companies.

I had friends at Google to help me get an answer. But I decided I never wanted to work for Google based on how callously they treated me during the interview process. It became clear to me that I was not a high priority person to them ... just a fungible commodity. This is true, but nobody likes to be shown the truth of their value like that.

I knew other people with other talents that were treated really well by Google. My skillset was not in that high demand ... or there were plenty of other candidates. But I felt like Google did not need to treat me like garbage.

After that I went and joined a startup. Quite happy now. We stole a few engineers from Google even :)

I always make sure we get back to our interview candidates as quickly as possible. I won't let us turn into a callous Google.

cthalupa 4 hours ago 0 replies      
While both parties here could learn how to communicate better, there's an important distinction:

The student is representing only himself. The recruiter is representing the company.

When a recruiter mishandles a situation so massively as was the case here, it puts Facebook in a poor light. Obviously, Facebook's engineering teams are well aware of what all of this technology is, but it is distasteful to see this complete lack of understanding and know that I might have to deal with it if I were to work with one of their recruiters.

I'm not saying the industry in general is better than this, but it would have taken the recruiter all of 30 seconds to draft an email to the hiring manager and ask for clarification - "This guy is really insistent on how he has POSIX and Linux experience and that this will be okay. What do?"

Also, putting Unix on the required skills for an intern position? What did they think was going to happen?

kstenerud 5 hours ago 1 reply      
Take this as a lesson in people handling. As a rule when dealing with business situations:

- Keep messages simple. Less information to process is better.

- If the response seems strange, start by assuming miscommunication and misunderstanding. Do not respond with more complicated information.

- When in doubt, say what they are expecting to hear and sort it out later when you talk voice or in person.

A more fruitful exchange would be:

Recruiter: We require having UNIX experience. If you have it could you update your resume and resend?

Candidate: Resending with UNIX experience written in.

Recruiter: Thanks! We'll be contacting you shortly.

betadreamer 2 hours ago 0 replies      
This happens all the time. Recruiters are sales people. They don't want to learn technical things; it's not in their interest to know what Unix is. They just want to validate whether you are good and, if so, sell you the position.

The proper response here is: "I have experience in Linux and Unix. Updated resume is attached." Done.

You need to treat them like they're your old uncle. Use minimal technical words, repeat what they ask/say, and most importantly respect them.

foo101 7 hours ago 3 replies      
Here are the facts about Unix.

* The name "UNIX" or "Unix" is trademarked.

* AIX, Solaris and HP-UX are certified Unix systems. These are bonafide Unix systems and there indeed are software developers today who work with these Unix systems.[2]

* The set of Unix-like systems is a superset of the set of Unix systems. The set of Unix-like systems include systems like Linux, FreeBSD, etc. which the set of certified Unix systems do not.

[1]: https://archive.org/details/bstj57-6-1905

[2]: http://www.makeuseof.com/tag/linux-vs-unix-crucial-differenc...

kev009 8 hours ago 1 reply      
Sort of off topic but I am just kind of wondering if there is a safety net if I ever burn out, and how much money I am potentially not earning :o)...

Any chance someone on here that was deeply technical transitioned into recruiting? Will you spill the beans like compensation ranges, per head bonus/commission, and satisfaction?

As an engineering manager I've done all my own recruiting and have been recognized by my management chain for doing a stand up job at it.

I know several socialites from high school who are barely technical, if at all, yet outwardly seem to be earning a lot of money doing recruiting or contract agency talent management. It seems like way fewer hours and less stress than I've put in to become a systems expert. Maybe that won't last forever with economic waves, but then again neither do a lot of tech jobs.

swalsh 8 hours ago 0 replies      
Just because you're right doesn't mean you've won. The recruiter is a gatekeeper; just say the thing you need to say to get past them and move on to the next level. The recruiter asks for Unix/Linux and your resume says Unix-like: get over yourself and change your resume. They are literally telling you the password.
kichik 9 hours ago 1 reply      
In my home country we say it's sometimes better to be smart than to be right. The recruiter is doing their job, and antagonizing them, no matter how right you are, is not the smart move.
acmustudent 8 hours ago 3 replies      
This is the official CMU Facebook recruiter. They have done this to multiple people I know; each time, those people tried to explain, and yet this keeps happening.
brad0 7 hours ago 1 reply      
Other comments say this guy should learn how to communicate better. I agree with that.

Knowing the person you're talking to helps greatly.

The average recruiter has a high school certificate and that's it. They're hired to do largely manual work comparing skills on resumes to skills on job positions.

Now that you know how they work, you should ask yourself: what's your goal? Is it to get that internship at Facebook? If so, how can you write your resume and cover letter to get the recruiter's attention? Put the skills from the job listing on your damn resume.

Personally I'd dislike working with this guy. I can tell he's a smart guy but he's misdirecting his intelligence.

noncoml 9 hours ago 2 replies      
True story, I once got an email from a recruiter asking for someone with experience in C, C+ and C++.

No idea if it was a typo for C# or the recruiter thought, we have C and C++, why not add C+ in there to increase our hits.

I found it amusing, but didn't start an argument.

wallabie 5 hours ago 1 reply      
This sounds so much like those ads that look for "8 years of Swift development experience" despite the fact that Swift only came out 3 or 4 years ago. The fact that a recruiter can 'recruit' for a company like Facebook without having enough knowledge of the area they're recruiting for is embarrassing not only for FB but for the recruiting profession in general.
tzs 7 hours ago 0 replies      
> Beyond software developers who have programmed in the 1970s, most people do not have experience with a true UNIX OS, and I would find it hard to believe that such outdated technologies are underpinning Facebook's advanced innovations

Every OS X from 10.5 on except 10.7 has been certified under Version 3 of the Single UNIX Specification, and thus is officially considered to be UNIX.

foobaw 9 hours ago 1 reply      
This is a tricky exchange. Both sides could've done better. To be honest though, a company like Facebook gets so many resumes that these mistakes are inevitable. Not saying it's acceptable, but it's just the nature of recruiting.
Show HN: Page.REST An API to fetch details from a web page as JSON page.rest
49 points by laktek  5 hours ago   15 comments top 7
xytop 49 minutes ago 1 reply      
keyle 3 minutes ago 0 replies      
I did something similar a loooong time ago. Granted not as sexy.


ganessh 3 hours ago 2 replies      
If I have to know the elements' selectors, why should I prefer this service over using an HTML parser?
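For reference, the roll-your-own route the comment alludes to is short even with just the standard library; this sketch (not tied to Page.REST's actual selector syntax) pulls a page's title and meta description in Python:

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collect <title> text and <meta name=...> content attributes."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# In practice the page would come from an HTTP fetch; inline here.
page = ('<html><head><title>Hello</title>'
        '<meta name="description" content="A test page"></head></html>')
parser = MetaExtractor()
parser.feed(page)
print(parser.title)                # Hello
print(parser.meta["description"])  # A test page
```

A hosted API still buys you fetching, JS rendering, and caching, which is presumably the service's pitch.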
kowdermeister 1 hour ago 1 reply      
I would make the $5 price and the token validity much larger, like "4rem" or something. I was looking at the CC input field and thinking "seriously? how much will you charge?"
geetfun 2 hours ago 1 reply      
Looks interesting. I wonder what kind of market this app might serve. For larger apps, I would worry about support. 5 dollars per year tells me that the developer is doing this as a hobby. For small side projects, I can see tinkerers building this themselves.
squiggy22 2 hours ago 0 replies      
cinooo 2 hours ago 1 reply      
Someone could really abuse this service, I don't see any mention of API limits.
Show HN: Make your web page dance with Rythm.js okazari.github.io
150 points by Okazari  12 hours ago   56 comments top 18
CM30 42 minutes ago 0 replies      
This would have been amazing in the days of Myspace or Geocities. It fits the aesthetic of that era very well.

Still, it's an interesting library, and has its uses on certain niche sites (or for easter eggs). Just don't use it by default on a business page or anything you want people to take seriously.

crispinb 10 hours ago 3 replies      
It's fun for a demo. But if I ever find it used on a web page I visit, I shall track the owner down and place them in isolation for life.
jim_d 11 hours ago 1 reply      
This made me smile. Good work.

ps: a nice big demo button beside 'View on Github' and 'Release Notes' would be a nice tweak :)

fowl2 6 hours ago 1 reply      
was disappointed to see the music was just a mp3, not synthesised ;P

edit: the track appears to be "Kinetic (The Crystal Method vs Dada Life)" <https://www.youtube.com/watch?v=uHJyAZtRrOY>

instakill 2 hours ago 0 replies      
Awesome stuff. You know you've made something great when most of the HN comments about your project are positive :)
prawn 8 hours ago 0 replies      
This will be everywhere on April Fools 2018.
JuggaloJohnnie 7 hours ago 0 replies      
We have MARQUEE tags back! jk... but it feels like a techno-boogie version of them for the web 2.0 generation! Freegen rocks.
gorg75 3 hours ago 2 replies      
First I thought the script just made things bounce at intervals, but then I noticed the different boxes for bass, mid and high range sounds actually follow the changes in the music. How do they do this? I guess I should read the source code :)
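In the browser this is usually done with the Web Audio API: an AnalyserNode exposes the FFT magnitude spectrum each animation frame, and "bass", "mid" and "high" are just sums over ranges of frequency bins (I haven't checked Rythm.js's source, so treat this as the generic technique). A pure-Python sketch of the idea on a synthetic signal:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum; stands in for what AnalyserNode's
    getByteFrequencyData hands you each frame."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

rate, n = 1000, 200       # toy sample rate (Hz) and window size
hz_per_bin = rate / n     # 5 Hz per frequency bin

# Synthetic "music": a strong 50 Hz bass tone plus a weaker 400 Hz tone.
signal = [math.sin(2 * math.pi * 50 * t / rate)
          + 0.3 * math.sin(2 * math.pi * 400 * t / rate)
          for t in range(n)]

mags = dft_magnitudes(signal)

def band_energy(lo_hz, hi_hz):
    """Sum spectrum magnitudes whose bin frequency falls in [lo, hi)."""
    return sum(m for k, m in enumerate(mags)
               if lo_hz <= k * hz_per_bin < hi_hz)

bass = band_energy(0, 200)
high = band_energy(200, 500)
assert bass > high  # the bass box would "dance" hardest on this signal
```

Each box's scale transform is then driven by its band's energy on every animation frame.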
magamind 10 hours ago 4 replies      
Spelling issue there with 'rhythm'.
magic_beans 7 hours ago 3 replies      
None of it works on mobile.
blairanderson 9 hours ago 0 replies      
I don't know what it is but I like it!!!
zaf 2 hours ago 0 replies      
Not working on Desktop Safari 10.1.2
revicon 11 hours ago 3 replies      
Is this supposed to work on mobile?
kixpanganiban 9 hours ago 2 replies      
Fun idea, but as soon as I hit `Start Demo` my CPU usage spiked up. Good way to exhaust a laptop/mobile device's battery ;)
anotheryou 11 hours ago 1 reply      
somehow not working here

edit: didn't see the "start demo" button...

vuldin 8 hours ago 1 reply      
For some reason I love this.
throwaway2016a 8 hours ago 1 reply      
Needs an option to enable only when the Konami Code is entered.
tribby 8 hours ago 1 reply      
It's missing variable fonts with animated font-variation-settings

...I'll see myself to the door :)

       cached 7 September 2017 10:02:01 GMT