hacker news with inline top comments - 12 Sep 2017
Sedentary Behavior and Mortality in U.S. Middle-Aged and Older Adults annals.org
21 points by happy-go-lucky  32 minutes ago   3 comments top 3
fragsworth 0 minutes ago 0 replies      
> Evaluation of their joint association showed that participants classified as high for both sedentary characteristics (high sedentary time [12.5 h/d] and high bout duration [10 min/bout]) had the greatest risk for death.

Is sleep included in the 12.5 hours per day?

Even if it isn't, I and everyone I know sits for at least that long each day. Definitely for more than 10 minutes at a time. If we need to interrupt our sitting bouts every 7.5 minutes in order to be healthy, I don't know how the hell to do this.

overcast 2 minutes ago 0 replies      
This should be obvious; however, people still refuse to get up and just walk around for a minute every hour. I spend a silly amount of time behind a computer, but I'll do 10-20 pushups / jumping jacks every hour or so. After hours, I'll bang out six miles on the hiking trails.

Get up, go get a drink, do a lap around the office. Get a stand/sit desk. Do some simple exercises. Pushups can be done anywhere, and will alleviate lower back pain.

solaxun 1 minute ago 0 replies      
In other news - water is wet.
Dyslexia geon.github.io
302 points by seonirav  3 hours ago   112 comments top 34
osteele 1 hour ago 2 replies      
The reading task involves a number of subtasks. If any of these don't work well, it's dyslexia. Different dyslexics have different symptoms and experiences depending on what part of the pipeline is affected how. I'm not surprised that a web page designed to simulate one person's experience doesn't match others'.

For an accessible but informed intro-level text, I recommend _Psychology of Reading_, by Rayner, Pollatsek, Ashby, and Clifton. (I took intro grad-level cognitive psychology from Rayner and Pollatsek.) One anecdote I remember from the first edition involved a subject who couldn't perform left-to-right saccades; she was dyslexic in English, but wouldn't have been in Hebrew or Arabic.

jasonkester 1 hour ago 8 replies      
That's not what it's like for me.

I simply can't tell the difference between symbols that have been rotated or reversed. So I made it through an engineering degree with "the alligator eats the bigger number" mnemonics for > and <.

The first thing I do when I get a new dev machine is take a pen and write "\r\n" up above the delete key. I've never gotten the slashes in the right direction on that key sequence from memory even though I type them several times a day (I just had to edit my post after looking it up right now and realizing I'd typed it wrong on my new laptop).

Anything that can be reversed, I reverse roughly 50% of the time. Because to my brain, they're interchangeable.
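As an aside, the order of that pair genuinely matters: "\r\n" is a single line break, while the reversed "\n\r" is two separate breaks. A small Python sketch makes the difference visible via splitlines():

```python
# CRLF ("\r\n") is treated as ONE line terminator; the reversed pair
# "\n\r" is a newline followed by a bare carriage return, i.e. TWO breaks.
crlf = "first\r\nsecond"
reversed_pair = "first\n\rsecond"

print(crlf.splitlines())           # ['first', 'second']
print(reversed_pair.splitlines())  # ['first', '', 'second']
```

So a reversed "\n\r" silently introduces an extra empty line, which is why getting it wrong is easy to miss until something downstream mis-parses.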

stephengillie 3 hours ago 6 replies      
This is close to how it feels when I get a "migraine flash". 2 differences - letters you're not looking at directly (a line or 2 down the page) don't move or jump.

Often, key letters will just completely disappear. In their place is...a kind of grey blank that your mind jumps over. You could swear there's something there, and you can see it when you move your eyes quickly across the word, or out of the corner of your eye when you read the previous or next word, but it disappears when you steady your eyes on that word.

Gusbenz 2 hours ago 3 replies      
I have Dyslexia. It's not like this bullshit. I think this kind of crap propagates more misunderstanding.
asadjb 3 hours ago 3 replies      
While I agree with the other comments that it wasn't impossible to read these letters, I could feel a real strain on my eyes and thought process while reading the page.

I had to "concentrate" in-order to understand each letter, something I don't have to do with normal text. I can't imagine how difficult it would be to constantly have to read everything like that.

Also, while most words were easy to make out, the ones that I don't use in everyday life, like "Typoglycemia", were impossible to figure out. I had to check what it was linked to.

ergothus 28 minutes ago 2 replies      
As far as I know, I don't have dyslexia. I've always been a voracious reader and haven't had the challenges one normally associates with the label.

But now and then - maybe once a month, maybe a few days in a row, some normal, easy, word will look _wrong_.

The letters don't "jump around", but it looks wrong the way a missplelled word does...I end up staring at it and trying to imagine how it SHOULD be spelled. That'd be less weird for me if it was a word with lots of typical english weirdness, like "necessary", but this happens on really _simple_ words, and usually just one word at any moment. Then later, that particular word stops doing it.

In recent history I can recall this happening with: "tree", "the", "matter", and (ironically) "simple", but I've never noticed any pattern to which words do it, and these are words that only do this for a few minutes or hours, then stop. "the" just looked as wrong as "teh" normally looks wrong, and every instance on the page looks like a glaring error until it subsides.

Does anyone know what causes this? It's not a notable problem for me - because these are simple words that I have a lot of familiarity with, I can just logically override the emotional component, but it still weirds me out. What else can my brain do this with, making something normal and mundane wrong and alien for a brief period?

FWIW, English is my native language.

bArray 1 hour ago 1 reply      
I assume I'm Dyslexic (never got tested for various reasons), but I experience what I have in a different way to this. When reading sentences I sometimes read the words in the wrong order, completely changing the meaning of the sentence at times.

The way to describe my experience is that when you read sentences, you are sometimes surprised by what you read because it seems wrong; you re-read it and find out that's not what it said at all. One improvement I might suggest for this site would be to have webcam input and change letters only when your eyes are not looking at them, and to make the change more subtle so you're not aware of it in your peripheral vision.

Day-to-day (not big) issues are:

* Having to re-read paragraphs because I read it wrong and therefore failed to understand it.

* Coming unstuck in a point I'm making because I failed to read the text correctly.

* Some fonts I really cannot read at any speed - basically if it differs too much from very standard computer fonts.

* Given up writing lowercase in my own handwriting because I cannot easily read it - the workaround for me was to write completely in capitals.

* Generally wanting to avoid reading because of the above issues.


* Able to spot mistakes in large bodies of text really quickly, but equally this could be a result of just being a programmer.

* Skim reading is easier because I have gotten used to getting words in the wrong order anyway, which is almost the same as missing words.

Also, I find it funny that one test for whether you are dreaming is that sentences you read in your dream don't make sense - I get this anyway :) Proof we are living in the matrix? ;)

Interested to hear if anybody else also experiences this and can even enlighten me a bit.

neom 14 minutes ago 0 replies      
For me the "movement" part is basically my mind trying to process a 2D object in 3D space. They "jump" because somewhere back there, my mind is trying to show me what the glyph would look like if it were floating in front of me. This is why b and d are hard, etc. Suffice it to say, mine is nothing like this website.
yladiz 27 minutes ago 0 replies      
Seeing this helped me gain a real appreciation for the effects of dyslexia that being told about it hasn't, even if it's not truly accurate to how everyone is affected by it. Seeing this has made me curious about how dyslexia affects non-English/Latin alphabet readers too, e.g. Chinese, Korean, Japanese, Arabic, Thai. Is there someone on here who has dyslexia and reads/writes in these scripts who can explain how it affects them? For example, in Korean, do the jamo within a syllable block move around in ways that don't make sense, switching places with one another? Or in Chinese, do the strokes within each character move around, or do the characters simply move positions within the phrase?
kuwze 1 hour ago 2 replies      
I have no clue if this works (I have not been diagnosed as dyslexic) but it certainly makes things more readable for me: https://opendyslexic.org/
_jn 11 minutes ago 0 replies      
What I'd be interested in seeing is how syllable splits (for example, with middots) affect reading comprehension for dyslexics.
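The middot idea can be mocked up crudely. The splitter below just breaks before consonant-to-vowel transitions; it is a toy for illustration, not a real hyphenation algorithm (those are dictionary/pattern based, e.g. TeX's approach), and the function name is made up:

```python
def middot_split(word, sep="\u00b7"):
    """Insert a middot before each consonant-to-vowel transition after
    the first vowel - a very crude stand-in for syllable splitting."""
    if not word:
        return word
    vowels = set("aeiouyAEIOUY")
    out = [word[0]]
    seen_vowel = word[0] in vowels
    for prev, ch in zip(word, word[1:]):
        # break where a vowel follows a consonant (a rough syllable onset)
        if ch in vowels and prev not in vowels and seen_vowel:
            out.append(sep)
        out.append(ch)
        seen_vowel = seen_vowel or ch in vowels
    return "".join(out)

print(middot_split("comprehension"))  # compr·eh·ens·ion (crude, but chunked)
```

Even this naive chunking hints at the question: does breaking a long word into visual pieces help or hurt a dyslexic reader?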
dnprock 17 minutes ago 0 replies      
I've got this book reading app with AI assistant. I wonder if it'll help with dyslexia. Would love to hear from others.


memsom 39 minutes ago 0 replies      
I'm mildly dyslexic. I sometimes read words incorrectly - so for example a local hill is called "Butser", but I frequently will read "Buster" as I drive past. So I find I read slowly aloud, because I need to process the text before I can speak it. It doesn't happen frequently anymore, but I do still get it. It's amazingly frustrating.

Weirdest thing - I can read that website pretty easily. I can see the words. I think it's because I look for more markers than just the word shape when I read. I don't know. How do others find it?

moopling 30 minutes ago 0 replies      
Pretty cool to see that the scripts are no exception. If you look at the script tags in the inspector, the code is also being reordered.
konart 3 hours ago 5 replies      
Had no problems reading this even as a non-native speaker. This is similar to the case when you mess around with the "inner" letters of a word but leave the first and last ones in their places (I read about this experiment maybe 10 years ago). While the word changes, you're still able to read it in text (other words are transformed too in this case, obviously).
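The scrambling konart describes (inner letters shuffled, first and last kept in place) is easy to reproduce. A minimal Python sketch - the function name is made up, and this is not the code the linked site uses:

```python
import random

def scramble_inner(word, rng=None):
    """Shuffle the inner letters of a word, keeping the first and
    last letters in place, as in the 'Typoglycemia' demos."""
    rng = rng or random.Random()
    if len(word) <= 3:
        return word  # nothing to shuffle
    inner = list(word[1:-1])
    rng.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

rng = random.Random(0)
print(" ".join(scramble_inner(w, rng) for w in
               "the inner letters can appear in any order".split()))
```

With a fixed seed the output is repeatable, which makes the effect easy to demo side by side with the original text.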
peter303 32 minutes ago 0 replies      
After a few years of reading, many people recognize words in their entirety, not as individual letters. They note the length, parts with upward extenders such as bdhk, descenders like gjp, rounded parts, etc. This is akin to reading ideographs. I wonder how this interacts with dyslexia?
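The whole-word-silhouette idea can be made concrete with a toy classifier (a sketch only - the letter classes are approximate and the names are made up):

```python
ASCENDERS = set("bdfhklt")  # letters that rise above x-height
DESCENDERS = set("gjpqy")   # letters that drop below the baseline

def word_shape(word):
    """Reduce a word to a rough silhouette: 'A' for ascenders and
    capitals, 'D' for descenders, 'x' for x-height letters."""
    def cls(ch):
        if ch.isupper() or ch in ASCENDERS:
            return "A"
        if ch in DESCENDERS:
            return "D"
        return "x"
    return "".join(cls(c) for c in word if c.isalpha())

print(word_shape("reading"))                 # xxxAxxD
print(word_shape("eat"), word_shape("cat"))  # xxA xxA
```

Words that share a silhouette ("eat"/"cat" above) are exactly the ones a shape-based reading strategy can confuse, which suggests one way this strategy and dyslexia might interact.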
gnicholas 2 hours ago 1 reply      
> There are three proposed cognitive subtypes of dyslexia (auditory, visual and attentional)

In the US, unfortunately, many experts focus exclusively on the phonological aspects of dyslexia (which corresponds to the "auditory" description above). In other parts of the world, the understanding is broader and includes visual aspects also.

It seems that the narrower, U.S.-based conception of dyslexia goes back to some research done at Yale in 1996 [1], which is often summarized as "dyslexia is phonological, not visual". Because Americans have such a high opinion of Yale, educators/experts here like to parrot this sound bite, even if they don't fully understand the research or competing research conclusions. Researchers and experts outside the U.S. have a different view (and IMO are less influenced by a research report from Yale).

I have been especially curious about the visual impacts of dyslexia, because the technology I work [2] on is visual, and according to many people with dyslexia, it is extraordinarily helpful for them. Having heard repeatedly that "dyslexia is not visual", I was curious to know why a visual technology would have a materially beneficial effect for readers with dyslexia.

In conversations with dyslexia researchers, I have learned that there may be second-order effects of dyslexia that are visual even if the root causes of dyslexia are not visual. Basically, people with dyslexia dislike reading and therefore do not read much. This causes them to lag on a number of reading-related skills, including visual tracking. Since visual aids can improve visual tracking, they can help readers with dyslexia, even if they don't have a type of dyslexia that was originally caused by visual differences.

1: http://dyslexia.yale.edu/Scientific_American_1996.pdf

2: http://www.BeeLineReader.com

symbolepro 1 hour ago 0 replies      
Seeing this, I remember an Indian movie, Taare Zameen Par (https://en.wikipedia.org/wiki/Taare_Zameen_Par). The film explores the life and imagination of Ishaan, an 8-year-old dyslexic child.
grondilu 3 hours ago 3 replies      
I had a hard time reading this (about 10 times slower than normal text). What does that say about me?
petercooper 2 hours ago 1 reply      
This is like a written version of how I seem to hear other people. Even when actively listening, I frequently hear different words (whatever my brain wants to hear or is predicting it heard, perhaps) which I question and generally find I heard wrong.
okket 1 hour ago 0 replies      
Previous discussion: https://news.ycombinator.com/item?id=11218677 (1.5 years ago, 200 comments)
sannee 55 minutes ago 0 replies      
I have always imagined that dyslexia feels somewhat like a mild dose of LSD. It's hard to describe, but the letters usually seem to move around (especially in peripheral vision) and one has to focus mostly on the individual symbols making up a word instead of the usual words-magically-appearing-in-brain stuff.
kkotak 1 hour ago 1 reply      
I wonder how dyslexia affects people from non-english speaking world.
paulpauper 1 hour ago 0 replies      
Does dyslexia mean the letters are jumbled, or is it just difficulty reading quickly and poor comprehension?
nanoniano 3 hours ago 4 replies      
Didn't see the point of this post. Then I realised I have JS off.

cf.: https://twitter.com/1990sLinuxUser/status/97350916902105089

lasermike026 2 hours ago 0 replies      
This really makes the point. Perhaps exploring different pathways would help. Braille? Audio? Different colors or fonts?
agumonkey 3 hours ago 1 reply      
Lovely. I love to see how the brain can manage going through "hurdles". Gives insight in how our brains handle parsing. Very nice.

Also: it tames my pseudo ADHD and helps me focus a lot. I'm tempted to have this for all text.

mproud 2 hours ago 1 reply      
The comments are GOLDEN!

Read the back-and-forth about the definition of bravery, from Ben Tarr and the rest of the commenters.

k__ 1 hour ago 0 replies      
lol, doesn't seem too hard. Words are still easily readable with the middle letters mixed up.
Double_a_92 3 hours ago 1 reply      
It's actually quite easy to read such scrambled words, unless they are some uncommon technical term that I've never seen before.
muzzammildotxyz 3 hours ago 2 replies      
That was really good. Can you please explain how you did it?
nxsynonym 3 hours ago 0 replies      
Didn't have too much difficulty, but now all the letters on every page are swimming.
pwdisswordfish 3 hours ago 2 replies      
I wonder if that page looks normal to dyslexics.
lr4444lr 3 hours ago 2 replies      
It was not actually as hard as the author probably thought. Relevant?[0]


Curl's backdoor threat haxx.se
289 points by sohkamyung  9 hours ago   103 comments top 15
notacoward 3 hours ago 1 reply      
The bit about code signing is really important. The people who wrote distro package managers recognized this years ago, and ever since have at least tried to do the right thing. That can make getting code into those repos a bit cumbersome, but that's kind of the price you pay.

Unfortunately, most of the language- and environment-specific package managers have pretty much ignored that issue. There's too often no way to verify that the code you just downloaded hasn't been tampered with. Heck, half the time you can't even be sure it's a version that's compatible with everything else you have. It's a total farce.

Software distribution is too important, security-wise and other-wise, to leave it to dilettantes as an afterthought to other things they were doing. Others should follow curl's example, instead of just dumping code into an insecure repo or (even worse) putting it on GitHub with a README that tells users to sudo the install script.

ghgr 8 hours ago 6 replies      
Backdoors in software are one of those nightmarish scenarios that disturb you until you just think about something else and kind of temporarily forget them (like nuclear war, killer bees, climate change or asteroid impact). Open Source just raises the bar but in no way solves these problems (for example OpenSSH's roaming feature/bug, OpenSSL and Heartbleed). In computing one can quickly become paranoid: software buttons for switching smartphones off (are they 'really' off?), always-on microphones, webcams (granted, you can cover them), blobs in your smartphone and home router, downloads from http://download.cnet.com (note the lack of https), winscp, putty from www.putty.org (again, no https, and not even the actual site, but nevertheless the first result from Google). On Linux the landscape is slightly better, but do you really trust all those packagers? Do you really understand each line of code when you "git clone foo; cd foo; ./configure; make; sudo make install"? And in X11 you can easily make a key-logger without even being root! [1]. And not even full disk encryption can protect us (Evil Maid [2]).

That's one of the reasons I'm skeptical of the Ethereum smart-contract concept. In theory it works, but in practice I'm not sure at all. The DAO heist was one early example of security bugs in smart contracts, but I fear they will become more common when malware developers turn to "contract-engineering".

[1] https://superuser.com/questions/301646/linux-keylogger-witho...

[2] https://www.schneier.com/blog/archives/2009/10/evil_maid_att...

angrygoat 7 hours ago 4 replies      
The sort of hypothetical security vulnerability here is likely to depend on undefined behaviour (buffer over-runs, subverting parsers, etc etc). Just another reason to continue moving over to safe languages, especially for the lower level bits of our stacks. HTTP is big and complicated, I'm much happier exposing Rust/Go/C#/... to it than I am exposing C to it.

In safe languages, backdoors must be far more explicit, so we close off the likely scenario posited here.

otakucode 27 minutes ago 0 replies      
It could simply be my lack of in-depth understanding of curl, but wouldn't curl make for a pretty weird target for a backdoor? It doesn't serve content or remain running for long periods of time, does it?
bambax 6 hours ago 0 replies      
> I'm convinced the most likely backdoor code in curl is a deliberate but hard-to-detect security vulnerability

Evil organisations and/or big government agencies are probably working on finding vulnerabilities and using them without reporting them.

That sounds more efficient, and harder to spot or prove, than trying to implement backdoors directly.

benmmurphy 4 hours ago 2 replies      
tinfoil: why is he suddenly writing a blog post on how curl is not backdoored? does this mean curl is backdoored but he can't say directly that it is backdoored :)
raesene6 7 hours ago 0 replies      
This is a very real threat for all software, open source and commercial, and hard to completely fix.

That said there are a number of possible mitigations and the fact that they're not more widespread is, to me, an indication that people who rely on software don't think that this threat is worth the trade-off of the additional costs or time that mitigating it would take.

For example :-

- Requiring packages signed by the developers for all package managers (e.g. https://theupdateframework.github.io/ ) . This would help mitigate the risk of a compromise on the package managers hosting, but we see many large software repositories that either don't have the concept or don't make much use of it (e.g. npm, rubygems, pip)

- Having some form of third party review of software packages. It would be possible for popular packages like curl to get regular security reviews by independent bodies. That doesn't completely remove the problem of backdoors but it makes it harder for one to go undetected. This one has obvious costs both in financial terms and also in terms of delaying new releases of those packages while reviews are done. There are some things which act a bit like this (e.g. bug bounty programmes) but they're not uniform or regular.

- Liability for insecure software. Really only applies to commercial software, but at the moment there doesn't seem to be much in the way of liability for companies having insecure software, which in turn reduces their incentives to spend money addressing the problem.

I'm sure a load of commercial software includes curl or libcurl, but if there was a backdoor in it that affected that sofware, I don't think the companies would have any liability for it at the moment, so there's no incentive for them to spend money preventing it.
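To make the first mitigation concrete: even the weakest form of package verification, checking a published SHA-256 before installing, is mechanical to automate. A Python sketch (the function name is made up; real release signing, as curl does with GPG, binds the artifact to a key rather than to a bare hash):

```python
import hashlib

def verify_sha256(path, expected_hex):
    """Compare a downloaded file against a published SHA-256 digest.
    Note: a checksum only detects tampering if the digest itself comes
    over a separate, trusted channel; signature schemes (GPG, TUF)
    exist precisely because a compromised mirror can swap both the
    artifact and its checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # hash in 64 KiB chunks so large artifacts don't need to fit in RAM
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex.lower()
```

The point raesene6 makes is that many language package managers don't even do this much by default, let alone developer-key signing.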

Perados 4 hours ago 4 replies      
Sometimes I wonder: what would happen if one of these invisible heroes dies? What would happen to Linux if Linus Torvalds dies? What would happen to curl if Daniel Stenberg dies? For curl, for instance, only Daniel can sign a release. So what happens if he is not able to do so anymore? This is just a small example, but you get the idea. There is so much power resting on these men that it sometimes gets very scary.
WalterBright 4 hours ago 2 replies      
Since one of the backdoor methods mentioned was introducing a memory safety bug (like a buffer overflow), one way to reduce the attack surface is to use a memory safe language.

The thing is, one can write memory safe code in C. The problem is the difficulty in verifying it is memory safe.

I've opined before that this is why, soon, people will demand that internet facing code be developed with a memory safe language.

davedx 7 hours ago 2 replies      
I'm reading a book called 'Nexus' at the moment. Last night I read a part where they are deliberately installing a backdoor in their own system. They do it by modifying the compiler itself to inject malicious machine code into the binary. It's hands down the best technical description of hacking I've ever read in a work of fiction - highly recommended.
snomad 3 hours ago 0 replies      
This is why servers/networks should be configured to reject/prevent outbound calls by default, only allowing connections from a whitelist.
shp0ngle 4 hours ago 1 reply      
Was there ever a big, significant backdoor in any widely used open source software?

I don't mean a bug like heartbleed, but an actual intentional backdoor.

thinkMOAR 7 hours ago 1 reply      
As others write, this is not specific to curl. Strict, explicit egress filtering is the best (and imho only) safety: a pain in the ass initially, but it avoids connections to any non-whitelisted destination.
aidos 8 hours ago 3 replies      
I guess I'm a bit cynical but this seems hand wavy to me. (Note, I love curl and implicitly trust it).

The argument that it would probably take too much code and would be too obvious doesn't seem solid. I'm no expert in this area but curl sends data over a network and sometimes runs as part of a larger application. It seems like the big dangerous bits are there and it wouldn't take a major bug to send the wrong thing.

_pmf_ 7 hours ago 2 replies      
> No. I've never seen a deliberate attempt to add a flaw, a vulnerability or a backdoor into curl.

That's exactly what someone who has deliberately put a backdoor into curl would say.

Google is suffering a meltdown as Gmail, Maps and YouTube go down thenextweb.com
28 points by pthomas551  16 minutes ago   6 comments top 6
dom0 3 minutes ago 0 replies      
Does not seem to affect Europe.

In fact, right now YouTube loads far quicker than it has for the last seven to ten days, where it would take ages to load any YouTube page.

jjlane 11 minutes ago 0 replies      
Our site was using hosted libraries, google fonts, and google analytics. All of which seemed to be behind captchas, throwing CORS errors, and 503ing since this morning. Swapped out JQuery cdn for now.
octo_t 3 minutes ago 0 replies      
Clearly a malicious attack by Apple before the iPhone X announcement later </s>
user5994461 2 minutes ago 0 replies      
Works fine for me in the UK. Must be another of these Google issues that only affect a few % of the users.
Helloworldboy 10 minutes ago 0 replies      
Refresh google.com without caching. Logo is missing. There's something you don't see every day.

Edit: Everything working fine for me again.

pthomas551 13 minutes ago 0 replies      
Even the jQuery CDN is down intermittently for us here in Chicago. Sometimes CSS for core apps like Calendar is not loading, either. Definitely something amiss.
Build a fast, secured and free static site in less than three hours fillmem.com
209 points by leoht  5 hours ago   92 comments top 29
douglasfshearer 1 hour ago 0 replies      
Neocities [0] does all of this!

- It's free (though you can become a supporter and get some extra benefits) [1]

- It's fast, since it uses its own CDN. [2]

- It's secure, all pages support SSL, even with custom domains. [3]

- It has a command line tool [4] that can be wrapped to automate upload of pages, or used in a Git hook.

- It has a plethora of learning resources [5].

[0] https://neocities.org

[1] https://neocities.org/supporter

[2] https://www.youtube.com/watch?v=-i6wvix6buI

[3] https://blog.neocities.org/default-ssl.html

[4] https://neocities.org/cli

[5] https://neocities.org/tutorials

tbrock 1 hour ago 4 replies      
Holy cow. Is this an improvement? Remember in 1999 when you could get a static website up in 3 minutes using ftp?
alias_neo 3 hours ago 1 reply      
I host mine on Digital Ocean. I commit the code to github, dev branch is deployed to stg.mydomain and master is deployed to mydomain.

In terms of deployment, I use Caddy which, with ~3 lines of config, will auto-TLS your site using Let's Encrypt and handle renewing the certs for you each month. Caddy also automatically pulls your changes, builds them with hugo, and deploys them with ~2 more lines of config.

It's the easiest solution (as a developer) I've come across where I just commit to Github and my blog is updated, and Caddy deals with my cert renewals.
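For reference, the auto-TLS part of a setup like alias_neo's might look like this hypothetical Caddyfile (Caddy v2 syntax; the domain and path are placeholders, and the git-pull-plus-hugo-rebuild step was handled by a plugin in older Caddy versions, so details vary):

```text
# Any site block with a public hostname gets automatic HTTPS
# via Let's Encrypt, including certificate renewal.
blog.example.com {
    root * /var/www/blog/public
    file_server
}
```

That really is the whole static-hosting config; the TLS provisioning happens with no directives at all.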

Lord_Zero 2 hours ago 3 replies      
I wrote a similar blog post: https://tberra.com/aws/amazon/meta/2016/11/12/the-birth-of-a...

The main differences on mine are:

- I use Jekyll, which is ranked #1 in the static site generator space.

- Hosted on AWS S3.

- CloudFront in front of S3.

- Routing and aliases handled by Route53.

- Deployed using a tool called s3_websites (change detection only uploading generated files AND cloud front cache invalidation for only the changed objects).

- Coded in a Docker container via a cloud IDE called c9.io using the Ruby template.

- Generator and site files committed to a GIT repository hosted on AWS CodeCommit.
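The change-detection step in a deploy tool like s3_websites can be sketched in a few lines: hash every generated file, compare against the manifest from the previous deploy, and upload only the differences. This is a sketch of the idea only, not that tool's actual implementation, and the names are made up:

```python
import hashlib
from pathlib import Path

def changed_files(site_dir, manifest):
    """Return (to_upload, new_manifest), where manifest maps
    relative path -> sha256 hex digest from the previous deploy."""
    new_manifest, to_upload = {}, []
    for path in sorted(Path(site_dir).rglob("*")):
        if not path.is_file():
            continue
        rel = str(path.relative_to(site_dir))
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        new_manifest[rel] = digest
        # upload anything new or whose content hash changed
        if manifest.get(rel) != digest:
            to_upload.append(rel)
    return to_upload, new_manifest
```

Pairing this with per-object CloudFront invalidation (as the post describes) keeps both upload time and invalidation cost proportional to what actually changed.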

discreditable 20 minutes ago 1 reply      
I use Pelican and host on NearlyFreeSpeech.net behind Cloudflare. I really like using Pelican. I'm hardly a front-end dev and I found it easy to create a custom minimalistic theme that I'm quite happy with. No JS required! Fire up your network monitor and head to https://brashear.me/blog/2017/07/30/migration-from-octopress... It's pretty easy to see <500ms load times.

Octopress was getting to be a pain in my butt due to ruby dependencies being awful to deal with.

jwilliams 3 hours ago 1 reply      
After many different iterations, I'm now using netlify (with middleman). Really hard to beat.

GitHub pages are amazingly fast and a pretty good default choice. With cloudflare it's a pretty solid combo.

But netlify's awareness/integration between the content and the cdn is really compelling. Imagine they'll be able to do a lot more with it down the line too.

abricot 3 hours ago 2 replies      
I had really hoped that this was about building a static site, but again you'll end up with a blog.

I've created an actual static site myself, but it takes a bit extra - especially from the theme.

Also, I don't understand why you'd use Hugo with Github when it already supports Jekyll?

KirinDave 23 minutes ago 0 replies      
I really don't understand why we are seeing Yet Another Tutorial for Yet Another SSG on the front page of HN.
IgorPartola 1 hour ago 0 replies      
Is there a piece of software that creates nice looking static photo galleries? I have used a few and they all seem to suffer from design that is 10-15 years old.
czechdeveloper 4 hours ago 0 replies      
Or you can just use GitLab Pages, which can handle Hugo generation for you. I do that and I definitely did not spend so much time on it. I can also add new posts directly from GitLab's UI, which is nice.
alfonsodev 1 hour ago 0 replies      
My preferred alternative is:

- mustache(command line) + html

- Firebase hosting (superstatic)

I just install a command line version of mustache for example [1] and run it over simple static templates:

`mustache data.json myTemplate.mustache > output.html`

I only need to install superstatic[2] locally if I want to debug a rewrite rule or redirect otherwise clean URLs work pretty well with a simple setting.

[1] https://github.com/janl/mustache.js/

[2] https://github.com/firebase/superstatic

trextrex 4 hours ago 4 replies      
As far as I know, Github pages doesn't support https for custom domains [1]. A better option for free hosting would be netlify, which supports Let's encrypt for custom domains.

[1] https://github.com/isaacs/github/issues/156

jpz 1 hour ago 0 replies      
Just to make mention in the comments in case people haven't seen the product, I've found Amazon's Lightsail (https://amazonlightsail.com) incredibly easy and cheap to launch a simple website upon, along with using their Route53 for DNS hosting.

Even though I know all the ins and outs of AWS, I really like this product for simple projects.

(I have no affiliation with Amazon)

kaushalmodi 3 hours ago 0 replies      
The author has done a great job of documenting all the low-level details about even how to use git to push stuff to the GitHub repo. Kudos for the excellent documentation.

I use Hugo + Gitlab + Netlify (free https). I use Emacs as my development environment, and Magit (https://magit.vc) has come as a boon to me. All the git shell scripting mentioned in this post reduces to a few keystrokes with the help of Magit. I'm not intending to divert the topic, but couldn't help mentioning that the Magit Kickstarter [1] needs some love.

Coming back to the Hugo topic, I believe that the 3 hours is a good practical estimate for someone who has never dabbled with git/github, domain control tweaks, CNAME, etc.

So don't take that 3 hour mention as a negative, and jump right into the post. Once you have the whole setup, updating your site is a simple git commit + git push (hardly a minute -- not counting the time it takes to gather content for a new post :)).

[1]: https://www.kickstarter.com/projects/1681258897/its-magit-th...

deepakkarki 1 hour ago 2 replies      
I wonder why people (many devs included) use stuff like wordpress to host simple blogs. static site generators are a blessing :)

I built my own custom static site generator (python + jinja2) for running my side project[1]

I just git push and Netlify picks it up. Simple, to the point and no JS.

[1] I run https://discoverdev.io , a "product hunt" for top engineering blog posts!

justinhj 1 hour ago 1 reply      
This is a neat tutorial. I use github pages with their built in Jekyll stuff which I'm semi happy with but I found Jekyll a big learning curve compared to Blogger which I was using before.

Curious why people want to serve static sites to users over https though.


falsedan 4 hours ago 1 reply      
Bit surprised to see this advertised as 3 hours (how long can it take to dump HTML into a gh-pages branch & push?). After reading, I see that there's a huge overhead in setup and configuration. The complexity seems way out of line with the result!
seanwilson 3 hours ago 2 replies      
I really like the speed of Hugo but I find the template language unintuitive and hard to read e.g. "if or condition1 condition2" and "lt 1 2". I wish you could swap it out for something else like Liquid templates...is that possible?

I'd just go for Netlify as well for hosting. It'll build Hugo sites for you when you push commits, they have a CMS you can connect with most static site generators, they deal with SSL setup for you and tons more features. Self-hosting anything eats up time and it wouldn't be as robust.

busterarm 1 hour ago 1 reply      
Damn, I must be doing something wrong...

It took me three months to rebuild https://www.forthepeople.com as a Jekyll site on a load-balanced cluster from WordPress.

testloop 1 hour ago 1 reply      
I also used Hugo for my site: https://testloop.co.uk. It's so simple... and FAST. It's hosted on AWS S3 with CloudFront & Route53 so it's not free, but at a cost of around $1.20 per month, it's not far off.
ekianjo 4 hours ago 1 reply      
It's not self-hosted if you rely on GitHub for hosting.
mizzao 2 hours ago 1 reply      
Another nice feature on top of this type of setup is to set up CI hooks on your repo to check for broken links on every push, e.g. with https://github.com/gjtorikian/html-proofer.

Example: https://github.com/mizzao/andrewmao.net/blob/master/Rakefile

MatthewK 3 hours ago 4 replies      
Might be a little off-topic but hopefully relevant enough. I'm rather new to web dev but currently have a static website hosted on AWS S3. My current workflow is to code the HTML and CSS files using Sublime Text then upload these files into a bucket manually via the AWS console. Is there a more efficient way to do this? And is there a simple way to enable HTTPS?
secminion 1 hour ago 1 reply      
I'm using React Gatsby on my blog.

I have access to the whole npm ecosystem. It's fast. No bloatware or weird code.

Styles with styled-components for easy maintenance. https://mateom.io

Deployed to an S3 bucket connected to CloudFlare.

I just type 'yarn deploy' and it builds my blog and pushes it. And I can commit everything to source control, as the keys are in aws-cli.

255kb 3 hours ago 0 replies      
This is the path I choose for most of my websites nowadays. Usually a combination of a statically generated website (I created my own generator), Firebase hosting (which offers the same free autogenerated SSL + URL rewriting), and Cloudflare. Like this, it's fast, SEO-friendly and most of all completely free.
therealmarv 2 hours ago 0 replies      
That's still pretty complicated IMHO. Write some markdown files in a directory structure, run mkdocs over it. Voilà... indexed, searchable static website.
carlmr 1 hour ago 0 replies      
I love it! This is the part of the 90s I want back (not the dial-up)
Spacemolte 2 hours ago 0 replies      
Any particular reason why you don't install using Homebrew?
IamNotAtWork 2 hours ago 2 replies      
Can someone explain the use of Cloudflare at the end? I thought you could do the same thing just through your domain registrar?
Billions of devices imperiled by new clickless Bluetooth attack arstechnica.com
88 points by mcone  2 hours ago   39 comments top 10
bjt2n3904 56 minutes ago 1 reply      
Link to the whitepaper.


Part of the attack is on BlueZ's implementation.

> In BlueZs case, L2CAP is included as part of the core Linux kernel code. This is a rather dangerous choice. Combining a fully exposed communication protocol, arcane features like EFS and a kernel space implementation is a recipe for trouble.

sillysaurus3 56 minutes ago 3 replies      
Stuff like this is exactly why physical pentesting is so effective. If you sneak into a company and stick a raspi into a corner, nobody tends to notice a black box amidst a bunch of cables. But that black box can attack the dev machines in a variety of ways: it can be a honeypot wifi AP until someone accidentally connects to it, at which point you now have the creds for the real network. Then you can connect to the real network and look for workstations to attack. Or, as this article points out, you might be able to use a tricky bluetooth attack to get onto the workstations directly.

I'm not sure there is any way to protect against this. Physical pentesters tend to get caught less than 10% of the time. It's very easy to sneak into a building if you know what you're doing and have confidence. And "knowing what you're doing" generally consists of "dress up like a construction worker xor interviewee."

codedokode 18 minutes ago 0 replies      
From the description of the vulnerability in the Linux kernel Bluetooth code:

> This function receives a configuration response buffer in the rsp argument, and its length in the len argument

> Each element it unpacks from the configuration response is validated and then packed back onto a response buffer, which is pointed to by the data argument.

> However, the size of this response buffer is not passed into the function

C developers have been repeating the same mistake for years. Why don't they invent some type or class for safely working with memory buffers?
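The abstraction being asked for is a buffer that carries its own capacity, so every write is bounds-checked. In C terms that would be a struct of (pointer, length, capacity) with checked append functions; here is the shape of the idea sketched in Python, purely as illustration (the kernel fix itself obviously has to be C):

```python
class BoundedBuffer:
    """A byte buffer that knows its capacity and refuses to overflow.

    A missing length check becomes a loud exception instead of silent
    memory corruption past the end of the buffer.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = bytearray()

    def write(self, chunk):
        if len(self._data) + len(chunk) > self.capacity:
            raise OverflowError(
                f"write of {len(chunk)} bytes exceeds capacity {self.capacity}"
            )
        self._data.extend(chunk)

    def getvalue(self):
        return bytes(self._data)
```

The vulnerable function in the write-up is the mirror image: it received the response buffer's pointer but not its size, so no such check was even possible.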

brndnmtthws 53 minutes ago 6 replies      
As a slightly related side note, I pretty much only turn on bluetooth when I actually need to use it (which is rarely, such as syncing my Garmin every now and then). It's a waste of battery power to keep it on, and Bluetooth is also often used to track people. For example, it's used by traffic monitoring systems to measure the speed of traffic[1] by storing and tracking the MAC address.

It would be nice if Android and iOS provided a convenient way to activate Bluetooth temporarily, only when needed.

[1]: http://www.tyco-its.com/products-and-services/urban-traffic-...

bauc 56 minutes ago 0 replies      
Is that why Google Play Protect was recommending disabling Bluetooth Share? That seems to have caused a lot of issues for people; turning it back on requires resetting all app preferences.
joe890 38 minutes ago 0 replies      
> It's already patched.

This refrain is tired and myopic.

We must operate with the assumption that like BadUSB, heartbleed, and this latest attack, there are likely devastating vulnerabilities present in all devices we use and actors may have the chance to exploit them before we ever become aware of them or have the opportunity to apply a patch.

jgaa 1 hour ago 2 replies      
So, I guess it's back to using wired headphones with the phone...
mpclark 39 minutes ago 0 replies      
I've noticed that, starting quite recently, Bluetooth has always been off every time I've gone to use it on my trusty old Nexus 5. I figured it was the sort of bug that tends to accumulate on old phones, but maybe not eh?
azinman2 25 minutes ago 1 reply      
What is the actual exploit? The article was very thin on details...
jasonmaydie 53 minutes ago 0 replies      
Chalk one up for Windows Phone: security through obscurity. On a more serious note, does the flaw happen because of a common open-source implementation?
House Address Twins Proximity paulplowman.com
145 points by edward  9 hours ago   77 comments top 16
jaggederest 5 hours ago 7 replies      
I live in a house right on the dividing line of Portland's neighborhoods.

On one side of a street, the house would be 1 N Graham St, on the other side of the street, it would be 1 NE Graham St.

Needless to say, some confusion occurs. In addition, many times locations are referred to as being on 39th and Graham, for example. So you must specify very carefully that you live at Number 39 Northeast Graham St, "Not 39th and Graham, at the corner of Graham and Williams, on the Northeast side"

It's a bit of a hassle. But a nice neighborhood. My mirror-neighbors kindly forward me packages, and I return the favor. Saves everyone a lot of agony.

(This is leaving aside the area in Portland where, being east of the 'dividing line', but west of the river, the houses are numbered identically except with a leading zero. Many mapping systems truncate this leading zero. Ergo you end up ~15 blocks away)

From 63 NE Graham to 63 N Graham - 0.1 miles - https://goo.gl/maps/xPLNKLzWv652

From 10 SW Boundary St to 010 SW Boundary St - 98 feet - https://goo.gl/maps/FRCXuYKir3M2

cwmma 1 hour ago 1 reply      
I work in Boston, MA, which has this problem since it annexed a couple of other munis and didn't change the street names, so there are 2 different twin addresses to my work address. Not a big deal, except that for some bizarre reason Google Maps, which allows you to save various addresses, doesn't save the zip code, so whenever I'd get directions from work it would just pick whichever one is closest to the destination and give me directions from there.
larrik 1 hour ago 0 replies      
Different but related, and something I ran into trouble with over the weekend:

My grandparents live at 297 <Road B>, and to get there you have to take <Road A> off the main road in town. Well, the house immediately before you turn onto <Road B> is 297 <Road A>, but they have their driveway _and mailbox_ physically located on <Road B>. This means that it appears there are two 297 <Road B> even though the other house is on a different road!

This weekend was just the pizza guy going to the wrong house, but a few months ago it was the cops when my grandfather fell and got hurt. Not a great situation!

adambowles 5 hours ago 0 replies      
I used to live at a twin address, my parents still do. It was across the boundary of towns and the homes are 500m apart. Postcode is only 1 letter different at the end (e.g. AB1 1CD/AB1 1CE).

The only issues I remember was getting each other's mail, and we'd just walk over and post it manually to them.

We added a name to our property so we could use that in mailing addresses to help clarify.

tehwebguy 27 minutes ago 0 replies      
My address is 1234 Street, but just 6 houses away is 1234 Cross Street, and the house number is visible from Street.

Mail is rarely delivered to the wrong place but non-UPS/FedEx Amazon orders go to the wrong place every now and then.

OliverJones 5 hours ago 4 replies      
Hmm. In the US, the people who operate emergency dispatch (police, fire, ambulance) often spend political capital to get changes to street names and addresses that might cause them to send service to the wrong house.
pavel_lishin 1 hour ago 0 replies      
Queens, NY is full of weird, confusing addresses and intersections - ignoring the addresses for a minute, you can go down 30th Ave, take a right on 30th street, then a left on 30th drive. If you miss the house, take a right on 31st st, then hang a right on 31st ave - 30th street is a one-way, so you'll have to go to 29th street to make a right and then a right on 30th street again. If you miss that, take a left on 30th Rd, and give up on ever arriving.

And that's not even the most confusing block of streets.

ubermonkey 2 hours ago 1 reply      
I have lived for 17 years at 209 (blank) Street, and found it absurd and ridiculous that less than a mile away is 209 West (blank) Street. That's not twins, but it's so close that it's a real problem.

The most hilarious bit, though, was when someone moved into the other house and filed their change of address with the address "209 West Blank Not Blank", in a hamfisted attempt to remove ambiguity, but they got it exactly backwards. We got their mail for months.

chairmanwow 1 hour ago 1 reply      
I really enjoyed the writing style of the author. Great content.

Why don't we just use lat long coordinates or geohashes for addresses? The shit that delivery people have to put up with is truly ridiculous.

S_A_P 5 hours ago 1 reply      
My father-in-law owns property in West Texas. No address, just GPS coordinates. He gets packages sent to a place in a nearby town.
jzwinck 2 hours ago 0 replies      
I grew up in a neighborhood where one pair of streets intersected twice. This led to quite some confusion whenever a friend said "Meet me at Kiln & White." The two intersections were far enough apart that many people did not know there was a duplicate, and both were prominent junctions suitable for meeting other kids.
jlebrech 6 hours ago 2 replies      
This kind of thing has made me wish delivery firms had a "don't attempt delivery, I'll pick it up from the depot" option.
slyall 5 hours ago 1 reply      
Location on Google streetview


Unfortunately the sign on the house on the right is blurred out.

jlebrech 6 hours ago 1 reply      
And you should send that info to Royal Mail and tell them they should bump some numbers up.
d--b 6 hours ago 3 replies      
This is when you expect the top HN comment to be: "hey I live in one of these homes, AMA."

Fun post though. Something to add to "what developers should know about addresses"

Sinnesloschen 4 hours ago 2 replies      
I think this kind of confusion could be solved with http://what3words.com/

It's a different way to look up locations across the entire world: using three random words, you can find any address or location to within 10 feet.

The only downside I see is the three words are all English words. Which could be unfamiliar to non English speaking parts of the world.

Just think about how easy it would be teach your children where they live by memorizing just three words instead of House number, Street name, City, State, Zip code.
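The encoding behind such a scheme is essentially a change of base: number every grid cell on Earth, then write that number as three base-N digits, where N is the wordlist size (N^3 triples cover N^3 cells; the real service uses a list of tens of thousands of words to reach ~3m cells). A toy sketch with a tiny made-up wordlist, not the actual what3words algorithm:

```python
# Toy 8-word list; the real system uses tens of thousands of words.
WORDS = ["apple", "bread", "cloud", "daisy", "eagle", "flame", "grape", "house"]

def to_words(cell_index):
    """Write a grid-cell number as three words (base-len(WORDS) digits)."""
    base = len(WORDS)
    if not 0 <= cell_index < base ** 3:
        raise ValueError("cell index out of range for this wordlist")
    a, rem = divmod(cell_index, base * base)
    b, c = divmod(rem, base)
    return (WORDS[a], WORDS[b], WORDS[c])

def from_words(triple):
    """Invert to_words: read three words back as a base-N number."""
    base = len(WORDS)
    a, b, c = (WORDS.index(w) for w in triple)
    return (a * base + b) * base + c
```

With 8 words this covers only 512 cells; the point is just that three words are a perfectly reversible address once the wordlist is big enough.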

How security flaws work: The buffer overflow arstechnica.com
15 points by rbanffy  37 minutes ago   2 comments top
feelin_googley 10 minutes ago 1 reply      
Am I misreading, or does this author fail to see any distinction between (a) a programming language and (b) functions in a "standard" library?

The most "secure" programs I have ever seen are written in C. The reason they are so "secure" is not because of the language chosen, but because of the competence of the person who wrote them. He writes his own basic functions and uses very few from the "standard" C library.

Privacy Loss in Apple's Implementation of Differential Privacy on MacOS 10.12 arxiv.org
147 points by sohkamyung  13 hours ago   32 comments top 4
frankmcsherry 5 hours ago 0 replies      
While I like this work, the title may be misleading to people who think of "privacy loss" as something distinct from "differential privacy".

What they show is that as you use Apple's implementation, the differential privacy parameter grows (providing weaker guarantees as time passes). They don't show that they can bypass the mechanism and its guarantees, just that Apple has rigged the implementation to decay the guarantees as you continue to use it (note: decay stops if you stop using Apple stuff).
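The growth follows from basic sequential composition: each differentially private report spends some epsilon, and under the simplest composition theorem the budgets add. A toy sketch using randomized response as the per-report mechanism (the per-report epsilon and report count below are made-up illustration numbers, not Apple's actual parameters):

```python
import math
import random

def randomized_response(truth, epsilon):
    """Report a true bit with probability e^eps / (1 + e^eps), else flip it.

    This single-bit mechanism satisfies epsilon-differential privacy.
    """
    p_truth = math.exp(epsilon) / (1 + math.exp(epsilon))
    return truth if random.random() < p_truth else 1 - truth

def cumulative_epsilon(per_report_epsilon, num_reports):
    """Basic sequential composition: total privacy loss is the sum."""
    return per_report_epsilon * num_reports
```

So "16 per day" in the sibling comment is just this sum for one day's reports; the guarantee weakens exponentially in that total, which is why an ever-growing budget matters.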

simonh 1 hour ago 0 replies      
So now Apple's privacy system is only stupidly more secure than everyone else's instead of absurdly more secure.

So 16 per day sounds like a lot more than 1 or 2 per day, but what do these numbers mean? Presumably 16 per day is a theoretical maximum if you were to generate every kind of privacy-related data every day. But is 16 really a lot? How high would that have to cumulatively go in order to be useful for extracting reliable info on an individual? Wouldn't the info collected on an individual still have to be associated with them? Frankly I'm not really able to determine any of that from the paper.

bugmen0t 3 hours ago 1 reply      
Funny that almost everyone in this thread seems to "get" differential privacy and thinks of it as a good tool. But when it was discussed for Mozilla Firefox everybody was appalled and enraged. (Thread at https://news.ycombinator.com/item?id=15071492)
mirimir 8 hours ago 4 replies      
Could someone please ELI5 how an "intimate" provider (such as Apple, Google or Microsoft) can collect any data on an ongoing basis without eventual loss of privacy?
Watching Hurricanes Irma, Jose and Katia From 22,000 Miles Above Earth nytimes.com
70 points by jashkenas  3 hours ago   14 comments top 6
jashkenas 59 minutes ago 0 replies      
It's nice to see that folks are enjoying this; it was a group effort to wrestle the 22.3 gigs of imagery frames over FTP and get them cropped, processed, overlaid and sorted in time.

For anyone who wants to dig deeper, the RAMMB branch of NOAA in Colorado maintains a page of GOES16 loops of the day: http://rammb.cira.colostate.edu/ramsdis/online/loop_of_the_d...

... and also runs a fancy imagery viewer where you can play around with different micrometer wavelength bands: http://rammb-slider.cira.colostate.edu/?sat=goes-16&sec=full...

carlmcqueen 1 hour ago 0 replies      
The New York Times' ability to display data, information and stories visually on the internet is truly a wonderful standard to see.

I really enjoyed the layout of this page, where the world sits, how the colors in the background don't take away from the effect from the day/night transitions. It is just wonderful.

amelius 14 minutes ago 0 replies      
I was armchair-wondering what would have happened if North Korea had tested their H-bomb in the eye of a hurricane instead, but I found this old broken HN post: [1]. It turns out not to be a good idea. Who would have thought? :)

On the other hand, I'm still curious if a "directed" explosion (i.e., not radial but say with only an x-component) could accomplish something.

PS: It is a pity that the early formation of the hurricanes is not visible in the video.

[1] https://news.ycombinator.com/item?id=698754

koolba 1 hour ago 4 replies      
These images are amazing. In particular, the zoom-in view on the Caribbean showing the direct hit to St Martin.

> In a NOAA reconnaissance mission, a plane flew through the eye wall to gather data on the storm, recording winds of 139 miles per hour at sea level.

I don't know who the pilot is for this but he's got bigger balls than me.

TeMPOraL 50 minutes ago 1 reply      
Wow, just wow. I didn't realize there finally are more cameras that can provide this image quality in space, beyond the one that is (was?) on the ISS. Also kudos to NYT for the way they put it together. I'm mesmerized by the globe video.
volkk 53 minutes ago 1 reply      
> In a NOAA reconnaissance mission, a plane flew through the eye wall to gather data on the storm, recording winds of 139 miles per hour at sea level.

What kind of planes can safely fly through a storm like this? Or is the eye a lot safer to go through?

A method for improving Milky Way exposures in light pollution lonelyspeck.com
236 points by sndean  14 hours ago   64 comments top 14
icanhackit 10 hours ago 5 replies      
> Shouldnt the image taken at ISO 6400 show more noise than the image taken at ISO 1600? If the exposures were the same brightness, yes. But these images were not taken with the same exposure. The first image started with twice as much shutter open time (30 seconds versus 15 seconds) and two stops more gain on the sensor (ISO 6400 vs ISO 1600). The result is that the first image has significantly more light data. This makes the final signal-to-noise ratio of the processed images higher on the first image. The higher the signal to noise ratio, the less noisy the resulting photo. So in this case it was actually better to overexpose and compensate later in post processing, despite the initial unprocessed images appearing unusable.

This is really neat. I've got a full frame camera with a sensor that has very good dynamic range and I'd learned to under-expose and then pull detail from the shadows in post processing as information can't be saved from blown highlights. This sort of flips that on its head, except the featured overexposed shot didn't have blown highlights, it just looked like it did. Instead it had a wealth of low-light data.
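The quoted SNR argument can be made concrete with a shot-noise model: photon arrivals are Poisson, so an exposure collecting N photons has signal N and noise sqrt(N), giving SNR = sqrt(N). ISO gain multiplies signal and shot noise alike, so it cancels out of the ratio. A sketch under that idealized assumption (it deliberately ignores read noise, which is where ISO does matter):

```python
import math

def shot_noise_snr(photon_rate, shutter_seconds):
    """SNR of a shot-noise-limited exposure: sqrt of the photon count.

    Sensor gain (ISO) scales signal and shot noise equally and cancels;
    only the total collected light sets this ratio.
    """
    photons = photon_rate * shutter_seconds
    return math.sqrt(photons)
```

This is why the 30s / ISO 6400 frame beat the 15s / ISO 1600 frame despite looking blown out: it simply gathered twice the light, lifting SNR by sqrt(2), and the extra gain could be undone in post.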

avenoir 34 minutes ago 1 reply      
Overexposing and underexposing (formally known as bracketing) is typically a great technique in landscape photography too, because it captures as much of the dynamic range as possible. Most DSLRs can bracket right out of the box. Set it up to take 3 exposures (1 underexposed, 1 normal and 1 overexposed) with about 2 or 3 stops in between. Then merge all 3 exposures in Photoshop and you'll get an image with a TON of information embedded in it for you to tweak in post-processing.

Also, while I haven't tried this, I'd think that using an ND filter in light-polluted areas like this could help a little bit with astrophotography.

A_No_Name_Mouse 1 hour ago 0 replies      
Would it be possible to filter out the pollution by subtracting an out-of-focus image from the in-focus one? The pollution is already spread out while the stars are point light sources. So wouldn't that leave the stars untouched while greatly reducing the light pollution?
teraflop 9 hours ago 9 replies      
While idly thinking about light pollution a while ago, I thought of a ridiculously impractical solution:

Mandate the replacement of all outdoor night-time illumination with LEDs that are pulse-width modulated at a low duty cycle. Synchronize them all to an accurate global clock (e.g. from a GPS receiver), so that for instance, all of the lights are simultaneously turned on for the first tenth of each UTC millisecond. Then an image sensor with a sufficiently fast global shutter could disable itself during every brief pulse of light, so that it picks up 90% of the incoming starlight, but almost none of the light pollution.
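The arithmetic behind the idea is simple: a sensor that blanks itself during the synchronized lamp pulses trades a duty-cycle's worth of starlight for (ideally) all of the lamp light. A toy model, assuming perfect clock synchronization and instantaneous LED switching:

```python
def gated_capture(star_flux, lamp_flux, duty_cycle, gate_shutter):
    """Return (starlight, lamplight) reaching the sensor per unit time.

    duty_cycle is the fraction of each cycle the lamps are on. A gated
    shutter closes exactly during the lamp pulse, losing that fraction
    of starlight but rejecting the lamp light entirely.
    """
    if gate_shutter:
        return star_flux * (1.0 - duty_cycle), 0.0
    return star_flux, lamp_flux * duty_cycle
```

With a 10% duty cycle the gated sensor keeps 90% of the starlight and, under these idealized assumptions, none of the pollution; in reality scattered light, reflections and unsynchronized sources would leak through.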

bemmu 2 hours ago 3 replies      
I have trouble visualizing how the view of the Milky Way as seen from Earth reconciles with pictures showing whole galaxies. Are we seeing it edge on? Is there any cool animation showing it first from the outside, then rotated to match how we see it in the sky from Earth?
StavrosK 7 hours ago 3 replies      
Here's something I've wondered about a lot, but haven't been able to find a definitive answer: Isn't ISO pretty much software gain control? Why ever increase the ISO on the camera instead of just taking photos at the native ISO and then post-processing the photo to a higher exposure? That way, you don't store the blown-out pixels (if you do overexpose).
anon1253 8 hours ago 0 replies      
Or get an IDAS-LPS, CLS or UHC filter (either on the lens or as a sensor clip-on). Or stack a bunch of images with proper dark subtraction. If you stack enough light sub-frames you can efficiently reconstruct dynamic range. Of course the ETTR advice is good too, but you can go much further even with consumer equipment.
michrassena 1 hour ago 0 replies      
I'm quite surprised any images of the Milky Way are possible in such a light-polluted environment, and I'll be using these techniques when I can to see if I can replicate the results.
jsjohnst 11 hours ago 0 replies      
As a photographer who's shot the Milky Way a lot, I can say this is excellent advice.
e12e 9 hours ago 1 reply      
Interesting. I would've liked to see the result of ISO 1600 at 30s too - just for comparison.

On a note about light pollution, the mention of sodium street lamps immediately made me think of filters - stars are suns, so should have wide spectra - why not just filter out the orange bit? Apparently I'm not the first with the idea (obviously):


https://petapixel.com/2016/12/14/purenight-filter-cuts-light... (has some nice with/without filter images)

johndoe90 7 hours ago 1 reply      
I wonder where the star trails are. Wouldn't a 30s exposure make the Earth's rotation noticeable?
srrge 8 hours ago 0 replies      
This article reminded me of the wonders that surround us while we go on living our boring human lives, so easily forgetting about the marvel of life and nature. Thanks for sharing this.
sickrumbear 10 hours ago 1 reply      
Resource limit reached, anyone have a cached version?
basicplus2 10 hours ago 2 replies      
When I was a child, all the street and business lights were turned off at about 11pm.

Now they are on 24/7.

Incredibly wasteful, although I suppose it is safer to go about one's business at night.

It certainly gave the opportunity to view the stars even if one lived in city areas.

North American Regional Dialects and Accents aschmann.net
3 points by fern12  1 hour ago   discuss
Topicbox FastMails new product for teams fastmail.com
83 points by robert-boehnke  9 hours ago   51 comments top 7
wyc 36 minutes ago 0 replies      
I can't wait to see products like these spawn brand new professions like Email Summarizers to get the gist out of long threads, Department Librarians to organize and catalog discussions and events, and Communications Coordinators to train teams on more effective patterns of e-communication.

PMs unwittingly take all these roles today, but these tools will surely unlock further specialization.

_RPM 4 minutes ago 0 replies      
FastMail really needs a UX team and graphic designer.
cpr 56 minutes ago 0 replies      
Would there be any way to turn this into a Zendesk-like solution, using threads for each support inquiry, and assignments of some sort?

We're happy Fastmail email users, and can almost live with email for support, and barely use any Zendesk features other than assignments, internal notes, and various views. But those three simple ZD features we do use are critical.

azinman2 21 minutes ago 0 replies      
This is just a mailing list, right? What am I missing?
mxuribe 3 hours ago 1 reply      
I'm always a big fan of org wikis... but the challenge is some users slack off on creating/posting content... but with Topicbox it seems users can simply draft/send an email, and bam, it's content for a sort of "wiki". This is a great idea, even if only for the low barrier to easily "creating" organizational content! Kudos to FastMail!
EGreg 30 minutes ago 0 replies      
Aren't many businesses going to be leery of sending their group emails to yet another external domain to be archived? Where are the turnkey open source email servers after 40 years? It seems no one solved SPAM in a way that's compatible with sending out your own, but for receiving and internal stuff like this?
nanoniano 3 hours ago 3 replies      
What they have to launch is a cheaper service. Their prices are laughable, although I can understand that, given they have no competition.
Bats slam into buildings because they can't 'see' them nature.com
14 points by TrickyRick  4 hours ago   9 comments top 5
Isamu 24 minutes ago 1 reply      
I had read this as Bots slamming into buildings, and was looking for a discussion of lidar and stereo vision systems.

But won't lidar have the same issue with a clean glass surface? Similarly with stereo vision, if there are no features at the surface to correlate.

grondilu 24 minutes ago 0 replies      
That made me wonder how bats see water surfaces. Though I suppose not seeing them is less of a problem since they're not vertical :P
sailfast 22 minutes ago 1 reply      
Interesting hypothetical: track bat evolution in urban areas over time to determine whether selection is occurring for bats whose sonar is more finely attuned to detecting diffuse returns from skyscrapers as actual signal or, for that matter, for improved actual sight to avoid the buildings in the first place.
theoh 8 minutes ago 0 replies      
This reminded me of (humor): http://www.mjt.org/exhibits/foundation_collections/depmori/d...

"The key idea in the Griffith hypothesis was that as the Myotis lucifugus emission increased in frequency, the emission actually crossed the thresholds from the extreme ultraviolet into the X-ray, thereby allowing the bat to fly unharmed through solid objects."

peterwwillis 13 minutes ago 0 replies      
TIL Some Hawkmoths shoot ultrasound from their junk to thwart bat attacks. http://www.pnas.org/content/112/20/6407.full.pdf
Draft of new Latin-based Kazakh alphabet revealed inform.kz
14 points by mparramon  1 hour ago   6 comments top 3
kmm 16 minutes ago 1 reply      
25 letters can't be enough; Kazakh has 9 phonemic vowels. I presume the six Latin vowel signs will still get diacritics?
supahfly_remix 20 minutes ago 0 replies      
Does this indicate a potential change of sphere of influence, like Turkish not choosing Arabic script and Vietnamese not choosing Chinese?
desireco42 22 minutes ago 0 replies      
Seems they already have a Cyrillic alphabet everyone uses.
       cached 12 September 2017 16:02:02 GMT