Is sleep included in the 12.5 hours per day?
Even if it isn't, I and everyone I know sit for at least that long each day. Definitely for more than 10 minutes at a time. If we need to interrupt our sitting bouts every 7.5 minutes in order to be healthy, I don't know how the hell to do this.
Get up, go get a drink, do a lap around the office. Get a stand/sit desk. Do some simple exercises. Pushups can be done anywhere, and will alleviate lower back pain.
For an accessible but informed intro-level text, I recommend _Psychology of Reading_, by Rayner, Pollatsek, Ashby, and Clifton. (I took intro grad-level cognitive psychology from Rayner and Pollatsek.) One anecdote I remember from the first edition involved a subject who couldn't perform left-to-right saccades; she was dyslexic in English, but wouldn't have been in Hebrew or Arabic.
I simply can't tell the difference between symbols that have been rotated or reversed. So I made it through an engineering degree with "the alligator eats the bigger number" mnemonics for > and <.
The first thing I do when I get a new dev machine is take a pen and write "\r\n" up above the delete key. I've never gotten the slashes in the right direction on that key sequence from memory even though I type them several times a day (I just had to edit my post after looking it up right now and realizing I'd typed it wrong on my new laptop).
Anything that can be reversed, I reverse roughly 50% of the time. Because to my brain, they're interchangeable.
Often, key letters will just completely disappear. In their place is...a kind of grey blank that your mind jumps over. You could swear there's something there, and you can see it when you move your eyes quickly across the word, or out of the corner of your eye when you read the previous or next word, but it disappears when you steady your eyes on that word.
I had to "concentrate" in order to understand each letter, something I don't have to do with normal text. I can't imagine how difficult it would be to constantly have to read everything like that.
Also, while most words were easy to make out, the ones that I don't use in everyday life, like "Typoglycemia", were impossible to figure out. I had to check what it was linked to.
But now and then - maybe once a month, maybe a few days in a row - some normal, easy word will look _wrong_.
The letters don't "jump around", but it looks wrong the way a missplelled word does...I end up staring at it and trying to imagine how it SHOULD be spelled. That'd be less weird for me if it was a word with lots of typical English weirdness, like "necessary", but this happens on really _simple_ words, and usually just one word at any moment. Then later, that particular word stops doing it.
In recent history I can recall this happening with: "tree", "the", "matter", and (ironically) "simple", but I've never noticed any pattern to which words do it, and these are words that only do this for a few minutes or hours, then stop. "the" just looked as wrong as "teh" normally looks wrong, and every instance on the page looks like a glaring error until it subsides.
Does anyone know what causes this? It's not a notable problem for me - because these are simple words that I have a lot of familiarity with, I can just logically override the emotional component - but it still weirds me out. What else can my brain do this with, making something normal and mundane seem wrong and alien for a brief period?
FWIW, English is my native language.
The way to describe my experience: when you read sentences, you are sometimes surprised by what you read because it seems wrong; then you re-read it and find out that's not what it said at all. The change I might suggest for this site would be to take webcam input and alter the text only when your eyes are not looking at it, and to make the change more subtle so you're not aware of it in your peripheral vision.
Day-to-day (not big) issues are:
* Having to re-read paragraphs because I read them wrong and therefore failed to understand them.
* Coming unstuck in a point I'm making because I failed to read the text correctly.
* Some fonts I really cannot read at any speed - basically anything that differs too much from very standard computer fonts.
* Having given up writing lowercase in my own handwriting because I cannot easily read it - the workaround for me was to write entirely in capitals.
* Generally wanting to avoid reading because of the above issues.
* Able to spot mistakes in large bodies of text really quickly - though that could equally be a result of just being a programmer.
* Skim reading is easier, because I have gotten used to getting words in the wrong order anyway, which is almost the same as missing words.
Also, I find it funny that one test for whether you are dreaming is that sentences you read in your dream don't make sense - I get this anyway :) Proof we are living in the matrix? ;)
Interested to hear if anybody else also experiences this and can even enlighten me a bit.
Weirdest thing - I can read that website pretty easily. I can see the words. I think it's because I look for more markers than just the word shape when I read. I don't know. How do others find it?
In the US, unfortunately, many experts focus exclusively on the phonological aspects of dyslexia (which corresponds to the "auditory" description above). In other parts of the world, the understanding is broader and includes visual aspects also.
It seems that the narrower, U.S.-based conception of dyslexia goes back to some research done at Yale in 1996, which is often summarized as "dyslexia is phonological, not visual". Because Americans have such a high opinion of Yale, educators/experts here like to parrot this sound bite, even if they don't fully understand the research or competing research conclusions. Researchers and experts outside the U.S. have a different view (and IMO are less influenced by a research report from Yale).
I have been especially curious about the visual impacts of dyslexia, because the technology I work on is visual, and according to many people with dyslexia, it is extraordinarily helpful for them. Having heard repeatedly that "dyslexia is not visual", I was curious to know why a visual technology would have a materially beneficial effect for readers with dyslexia.
In conversations with dyslexia researchers, I have learned that there may be second-order effects of dyslexia that are visual even if the root causes of dyslexia are not visual. Basically, people with dyslexia dislike reading and therefore do not read much. This causes them to lag on a number of reading-related skills, including visual tracking. Since visual aids can improve visual tracking, they can help readers with dyslexia, even if they don't have a type of dyslexia that was originally caused by visual differences.
Also: it tames my pseudo ADHD and helps me focus a lot. I'm tempted to have this for all text.
Read the back-and-forth about the definition of bravery, from Ben Tarr and the rest of the commenters.
Unfortunately, most of the language- and environment-specific package managers have pretty much ignored that issue. There's too often no way to verify that the code you just downloaded hasn't been tampered with. Heck, half the time you can't even be sure it's a version that's compatible with everything else you have. It's a total farce.
Software distribution is too important, security-wise and other-wise, to leave it to dilettantes as an afterthought to other things they were doing. Others should follow curl's example, instead of just dumping code into an insecure repo or (even worse) putting it on GitHub with a README that tells users to sudo the install script.
That's one of the reasons I'm skeptical of the Ethereum smart-contract concept. In theory it works, but in practice I'm not sure at all. The DAO heist was one early example of security bugs in smart contracts, but I fear they will become more common when malware developers turn to "contract-engineering".
In safe languages, backdoors must be far more explicit, so we close off the likely scenario posited here.
Evil organisations and/or big government agencies are probably working on finding vulnerabilities and using them without reporting them.
That sounds more efficient, and nearly impossible to spot or prove, compared with trying to implement backdoors directly.
That said there are a number of possible mitigations and the fact that they're not more widespread is, to me, an indication that people who rely on software don't think that this threat is worth the trade-off of the additional costs or time that mitigating it would take.
For example:
- Requiring packages to be signed by their developers for all package managers (e.g. https://theupdateframework.github.io/). This would help mitigate the risk of a compromise of the package manager's hosting, but we see many large software repositories that either don't have the concept or don't make much use of it (e.g. npm, rubygems, pip).
- Having some form of third party review of software packages. It would be possible for popular packages like curl to get regular security reviews by independent bodies. That doesn't completely remove the problem of backdoors but it makes it harder for one to go undetected. This one has obvious costs both in financial terms and also in terms of delaying new releases of those packages while reviews are done. There are some things which act a bit like this (e.g. bug bounty programmes) but they're not uniform or regular.
- Liability for insecure software. Really only applies to commercial software, but at the moment there doesn't seem to be much in the way of liability for companies having insecure software, which in turn reduces their incentives to spend money addressing the problem.
I'm sure a load of commercial software includes curl or libcurl, but if there were a backdoor in it that affected that software, I don't think the companies would have any liability for it at the moment, so there's no incentive for them to spend money preventing it.
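To illustrate the signing/verification idea in spirit: even just pinning a digest catches tampering between the publisher and you. A minimal sketch (the file and hash here are stand-ins; a real pipeline would obtain the expected hash from a signed release manifest over a separate trusted channel):

```shell
#!/bin/sh
# Stand-in for the downloaded artifact (a real script would fetch it).
printf 'hello\n' > pkg.tar.gz

# Pinned digest of the expected contents (this is sha256 of "hello\n").
expected=5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03

actual=$(sha256sum pkg.tar.gz | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH - refusing to install" >&2
    exit 1
fi
```

Schemes like TUF go further: they protect the manifest itself, handle key rotation, and prevent rollback to old vulnerable versions.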
The thing is, one can write memory safe code in C. The problem is the difficulty in verifying it is memory safe.
I've opined before that this is why, soon, people will demand that internet facing code be developed with a memory safe language.
I don't mean a bug like heartbleed, but an actual intentional backdoor.
The argument that it would probably take too much code and would be too obvious doesn't seem solid. I'm no expert in this area but curl sends data over a network and sometimes runs as part of a larger application. It seems like the big dangerous bits are there and it wouldn't take a major bug to send the wrong thing.
That's exactly what someone who has deliberately put a backdoor into curl would say.
In fact, right now YouTube loads far quicker than it has for the last seven to ten days, where it would take ages to load any YouTube page.
Edit: Everything working fine for me again.
- It's free (though you can become a supporter and get some extra benefits)
- It's fast, since it uses its own CDN.
- It's secure: all pages support SSL, even with custom domains.
- It has a command line tool that can be wrapped to automate uploading pages, or used in a Git hook.
- It has a plethora of learning resources.
In terms of deployment, I use Caddy which, with ~3 lines of config, will auto-TLS your site using Let's Encrypt and handle renewing the certs for you each month. Caddy also automatically pulls your changes, builds them with hugo, and deploys them with ~2 more lines of config.
It's the easiest solution (as a developer) I've come across, where I just commit to GitHub and my blog is updated, and Caddy deals with my cert renewals.
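For anyone curious, the setup described above amounts to a Caddyfile along these lines (the domain, repo URL, and paths are hypothetical, and the `git`/`then` directives come from Caddy's git plugin, so exact syntax may vary by version):

```
blog.example.com
root /srv/blog/public
git https://github.com/example/blog {
    then hugo --destination=/srv/blog/public
}
```

Naming a real domain on the first line is what triggers Caddy's automatic Let's Encrypt certificate issuance and renewal; no extra TLS config is needed.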
The main differences on mine are:
- I use Jekyll, which is ranked #1 in the static site generator space.
- Hosted on AWS S3.
- CloudFront in front of S3.
- Routing and aliases handled by Route53.
- Deployed using a tool called s3_websites (it detects changes so only modified generated files are uploaded, and invalidates the CloudFront cache for only the changed objects).
- Coded in a Docker container via a cloud IDE called c9.io using the Ruby template.
- Generator and site files committed to a Git repository hosted on AWS CodeCommit.
Octopress was getting to be a pain in my butt due to ruby dependencies being awful to deal with.
GitHub Pages is amazingly fast and a pretty good default choice. With Cloudflare it's a pretty solid combo.
But Netlify's awareness/integration between the content and the CDN is really compelling. I imagine they'll be able to do a lot more with it down the line too.
I've created an actual static site myself, but it takes a bit of extra work - especially on the theme side.
Also, I don't understand why you'd use Hugo with GitHub Pages when it already supports Jekyll?
- mustache(command line) + html
- Firebase hosting (superstatic)
I just install a command line version of mustache, for example, and run it over simple static templates:
`mustache data.json myTemplate.mustache > output.html`
I only need to install superstatic locally if I want to debug a rewrite rule or redirect otherwise clean URLs work pretty well with a simple setting.
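To make that concrete, a minimal (hypothetical) input pair for the command above might look like:

```
data.json:
{ "title": "My Page", "items": ["one", "two"] }

myTemplate.mustache:
<h1>{{title}}</h1>
<ul>
{{#items}}
  <li>{{.}}</li>
{{/items}}
</ul>
```

Running `mustache data.json myTemplate.mustache > output.html` then emits the heading plus one `<li>` per entry in `items`.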
Even though I know all the ins and outs of AWS, I really like this product for simple projects.
(I have no affiliation with Amazon)
I use Hugo + Gitlab + Netlify (free https). I use Emacs as my development environment, and Magit (https://magit.vc) has come as a boon to me. All the git shell scripting mentioned in this post reduces to a few keystrokes with the help of Magit. I'm not intending to divert the topic, but couldn't help mentioning that the Magit Kickstarter needs some love.
Coming back to the Hugo topic, I believe that the 3 hours is a good practical estimate for someone who has never dabbled with git/github, domain control tweaks, CNAME, etc.
So don't take that 3 hour mention as a negative, and jump right into the post. Once you have the whole setup, updating your site is a simple git commit + git push (hardly a minute -- not counting the time it takes to gather content for a new post :)).
I built my own custom static site generator (python + jinja2) for running my side project.
I just git push and Netlify picks it up. Simple, to the point and no JS.
I run https://discoverdev.io , a "product hunt" for top engineering blog posts!
Curious why people want to serve static sites to users over https though.
I'd just go for Netlify as well for hosting. It'll build Hugo sites for you when you push commits; they have a CMS you can connect to most static site generators; they deal with SSL setup for you; and there are tons more features. Self-hosting anything eats up time, and it wouldn't be as robust.
It took me three months to rebuild https://www.forthepeople.com as a Jekyll site on a load-balanced cluster from WordPress.
I have access to the whole npm ecosystem. It's fast. No bloatware or weird code.
Styled with styled-components for easy maintenance. https://mateom.io
Deployed to an S3 bucket connected to CloudFlare.
I just type 'yarn deploy' and it builds my blog and pushes it. And I can commit everything to source control, as the keys are in aws-cli.
Part of the attack is on BlueZ's implementation.
> In BlueZ's case, L2CAP is included as part of the core Linux kernel code. This is a rather dangerous choice. Combining a fully exposed communication protocol, arcane features like EFS and a kernel space implementation is a recipe for trouble.
I'm not sure there is any way to protect against this. Physical pentesters tend to get caught less than 10% of the time. It's very easy to sneak into a building if you know what you're doing and have confidence. And "knowing what you're doing" generally consists of "dress up like a construction worker xor interviewee."
> This function receives a configuration response buffer in the rsp argument, and its length in the len argument
> Each element it unpacks from the configuration response is validated and then packed back onto a response buffer, which is pointed to by the data argument.
> However, the size of this response buffer is not passed into the function
C developers have been repeating the same mistakes for years. Why don't they adopt some type or class for safe work with memory buffers?
It would be nice if Android and iOS provided a convenient way to activate Bluetooth temporarily, only when needed.
This refrain is tired and myopic.
We must operate with the assumption that like BadUSB, heartbleed, and this latest attack, there are likely devastating vulnerabilities present in all devices we use and actors may have the chance to exploit them before we ever become aware of them or have the opportunity to apply a patch.
On one side of a street, the house would be 1 N Graham St, on the other side of the street, it would be 1 NE Graham St.
Needless to say, some confusion occurs. In addition, many times locations are referred to as being on 39th and Graham, for example. So you must specify very carefully that you live at Number 39 Northeast Graham St: "Not 39th and Graham - at the corner of Graham and Williams, on the Northeast side."
It's a bit of a hassle. But a nice neighborhood. My mirror-neighbors kindly forward me packages, and I return the favor. Saves everyone a lot of agony.
(This is leaving aside the area in Portland where, being east of the 'dividing line', but west of the river, the houses are numbered identically except with a leading zero. Many mapping systems truncate this leading zero. Ergo you end up ~15 blocks away)
From 63 NE Graham to 63 N Graham - 0.1 miles - https://goo.gl/maps/xPLNKLzWv652
From 10 SW Boundary St to 010 SW Boundary St - 98 feet - https://goo.gl/maps/FRCXuYKir3M2
My grandparents live at 297 <Road B>, and to get there you have to take <Road A> off the main road in town. Well, the house immediately before you turn onto <Road B> is 297 <Road A>, but they have their driveway _and mailbox_ physically located on <Road B>. This means that it appears there are two 297 <Road B> even though the other house is on a different road!
This weekend was just the pizza guy going to the wrong house, but a few months ago it was the cops when my grandfather fell and got hurt. Not a great situation!
The only issues I remember was getting each other's mail, and we'd just walk over and post it manually to them.
We added a name to our property so we could use that in mailing addresses to help clarify.
Mail is rarely delivered to the wrong place but non-UPS/FedEx Amazon orders go to the wrong place every now and then.
And that's not even the most confusing block of streets.
The most hilarious bit, though, was when someone moved into the other house and filed their change of address with the address "209 West Blank Not Blank", in a hamfisted attempt to remove ambiguity, but they got it exactly backwards. We got their mail for months.
Why don't we just use lat long coordinates or geohashes for addresses? The shit that delivery people have to put up with is truly ridiculous.
Unfortunately the sign on the house on the right is blurred out.
Fun post though. Something to add to "what developers should know about addresses"
It's a different way to look up locations across the entire world: using three random words, you can find any address or location to within 10 feet.
The only downside I see is that the three words are all English words, which could be unfamiliar to non-English-speaking parts of the world.
Just think about how easy it would be to teach your children where they live by memorizing just three words instead of house number, street name, city, state, and zip code.
The most "secure" programs I have ever seen are written in C. The reason they are so "secure" is not because of the language chosen, but because of the competence of the person who wrote them. He writes his own basic functions and uses very few from the "standard" C library.
What they show is that as you use Apple's implementation, the differential privacy parameter grows (providing weaker guarantees as time passes). They don't show that they can bypass the mechanism and its guarantees, just that Apple has rigged the implementation to decay the guarantees as you continue to use it (note: the decay stops if you stop using Apple's stuff).
So 16 per day sounds like a lot more than 1 or 2 per day, but what do these numbers mean? Presumably 16 per day is a theoretical maximum if you were to generate every kind of privacy-related data every day. But is 16 really a lot? How high would it have to go cumulatively in order to be useful for extracting reliable info on an individual? Wouldn't the info collected on an individual still have to be associated with them? Frankly, I'm not really able to determine any of that from the paper.
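One way to read those numbers: under basic sequential composition (a standard differential-privacy result), the per-query privacy losses add, so a daily budget compounds across days:

```latex
\varepsilon_{\mathrm{total}} \;\le\; \sum_{i=1}^{k} \varepsilon_i
\qquad\text{e.g. } 16/\text{day} \;\Rightarrow\; \varepsilon_{\mathrm{total}} \le 480 \text{ after } 30 \text{ days}
```

Whether any particular total is "a lot" depends on the mechanism, but an effective epsilon in the hundreds is generally considered close to no guarantee at all; tighter (advanced) composition bounds grow more slowly, but they still grow.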
For anyone who wants to dig deeper, the RAMMB branch of NOAA in Colorado maintains a page of GOES16 loops of the day: http://rammb.cira.colostate.edu/ramsdis/online/loop_of_the_d...
... and also runs a fancy imagery viewer where you can play around with different micrometer wavelength bands: http://rammb-slider.cira.colostate.edu/?sat=goes-16&sec=full...
I really enjoyed the layout of this page, where the world sits, how the colors in the background don't take away from the effect from the day/night transitions. It is just wonderful.
On the other hand, I'm still curious if a "directed" explosion (i.e., not radial but say with only an x-component) could accomplish something.
PS: It is a pity that the early formation of the hurricanes is not visible in the video.
> In a NOAA reconnaissance mission, a plane flew through the eye wall to gather data on the storm, recording winds of 139 miles per hour at sea level.
I don't know who the pilot is for this but he's got bigger balls than me.
What kind of planes can safely fly through a storm like this? Or is the eye a lot safer to go through?
This is really neat. I've got a full frame camera with a sensor that has very good dynamic range and I'd learned to under-expose and then pull detail from the shadows in post processing as information can't be saved from blown highlights. This sort of flips that on its head, except the featured overexposed shot didn't have blown highlights, it just looked like it did. Instead it had a wealth of low-light data.
Also, while I haven't tried this, I'd think that using an ND filter in light-polluted areas like this could help a little bit with astrophotography.
Mandate the replacement of all outdoor night-time illumination with LEDs that are pulse-width modulated at a low duty cycle. Synchronize them all to an accurate global clock (e.g. from a GPS receiver), so that for instance, all of the lights are simultaneously turned on for the first tenth of each UTC millisecond. Then an image sensor with a sufficiently fast global shutter could disable itself during every brief pulse of light, so that it picks up 90% of the incoming starlight, but almost none of the light pollution.
On a note about light pollution, the mention of sodium street lamps immediately made me think of filters - stars are suns, so should have wide spectra - why not just filter out the orange bit? Apparently I'm not the first with the idea (obviously):
https://petapixel.com/2016/12/14/purenight-filter-cuts-light... (has some nice with/without filter images)
Now they are on 24/7
Incredibly wasteful, although I suppose it is safer to go about one's business at night.
It certainly gave the opportunity to view the stars even if one lived in city areas.
PMs unwittingly take all these roles today, but these tools will surely unlock further specialization.
We're happy Fastmail email users, and can almost live with email for support, and barely use any Zendesk features other than assignments, internal notes, and various views. But those three simple ZD features we do use are critical.
But won't lidar have the same issue with a clean glass surface? Similarly with stereo vision, if there are no features at the surface to correlate.
"The key idea in the Griffith hypothesis was that as the Myotis lucifugus emission increased in frequency, the emission actually crossed the thresholds from the extreme ultraviolet into the X-ray, thereby allowing the bat to fly unharmed through solid objects."