I know there's a package that claims to update the file ignore pattern to match the open project, but it really doesn't work well at all.
> Minor improvements to file load times
I didn't even realize there was room to squeeze out more performance here. Sublime Text is wicked-fast opening pretty much everything I throw at it.
If Sublime is going to acknowledge Package Control, why not just ship with it? I'm sure the Package Control folks would be glad to move their repo upstream.
I've moved between maybe half a dozen editors over the past half-decade, but I always end up coming back to Sublime.
But I wonder about what the "right" way to blend gradients really is -- the article shows how linear blending of bright hues results in an arguably more natural transition.
Yet a linear blending from black to white would actually, perceptually, feel too light -- exactly what Fig. 1 looks like -- the whole point is that a black-to-white gradient looks more even if calculated in sRGB, and not linearly.
So for gradients intended to look good to human eyes, or more specifically that change at a perceptually constant rate, what is the right algorithm when color is taken into account?
I wonder if relying just on gamma (which maps only brightness) is not enough, and whether there are equivalent curves for hue and saturation? For example, looking at any circular HSV color picker, we're very sensitive to changes around blue, and much less so around green -- is there an equivalent perceptual "gamma" for hue? Should we take that into account for even better gradients, and calculate gradients as linear transitions in HSV rather than RGB?
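One common answer is to at least do the interpolation in linear light rather than on the gamma-encoded sRGB values; for perceptually even spacing people usually reach for a perceptually uniform space such as CIELAB rather than HSV. As a minimal sketch of the linear-light version (my own illustration, assuming the standard sRGB transfer functions, not something from the article):

```python
# Blending two sRGB colors either naively (on the gamma-encoded values)
# or in linear light, using the standard sRGB transfer functions.

def srgb_to_linear(c):
    # c is a single channel value in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def lerp(a, b, t):
    return a + (b - a) * t

def blend_naive(rgb1, rgb2, t):
    # Interpolate the gamma-encoded values directly (what most code does).
    return tuple(lerp(a, b, t) for a, b in zip(rgb1, rgb2))

def blend_linear(rgb1, rgb2, t):
    # Convert to linear light, interpolate there, convert back for display.
    lin1 = [srgb_to_linear(c) for c in rgb1]
    lin2 = [srgb_to_linear(c) for c in rgb2]
    return tuple(linear_to_srgb(lerp(a, b, t)) for a, b in zip(lin1, lin2))

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(blend_naive(red, green, 0.5))   # (0.5, 0.5, 0.0) -- the muddy midpoint
print(blend_linear(red, green, 0.5))  # approx (0.735, 0.735, 0.0) -- noticeably brighter
```

The naive midpoint of pure red and green is visibly darker than either endpoint; the linear-light midpoint keeps the perceived brightness up, which is usually what people mean by a "more natural" transition.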
This is why if you render vector graphics to a raster image at high resolution and then scale the image down (using high quality resampling), you get something that looks substantially thinner/lighter than a vector render.
This causes all kinds of problems with accurately rendering very detailed vector images full of fine lines and detailed patterns (e.g. zoomed-out maps). It also breaks WYSIWYG between high-resolution printing and screen renders. (It doesn't help that the antialiasing in common vector graphics / text renderers is also fairly inaccurate in general for detailed shapes, leading to weird seams etc.)
But nobody can afford to fix their gamma handling code for on-screen rendering, because all the screen fonts we use were designed with the assumption of wrong gamma treatment, which means most text will look too thin after the change.
* * *
To see a prototype of a better vector graphics implementation than anything in current production, and some nice demo images of how broken current implementations are when they hit complicated graphics, check this 2014 paper: http://w3.impa.br/~diego/projects/GanEtAl14/
f(x+eps)/f(x) ~= eps f'(x)/f(x) + 1
f(x) = x^2.2, f'(x) = 2.2 x^1.2
f(x+eps)/f(x) ~= 2.2 eps/x + 1
Human response to light is not particularly well-modeled by a logarithmic response. It's --- no big surprise --- better modeled by a power law.
This stuff is confusing because there are two perceptual "laws" that people like to cite: Weber-Fechner, and Stevens's. Weber-Fechner is logarithmic; Stevens's is a generalized power-law response.
I investigated and wrote a post called "Computer color is only kinda broken".
This post includes visuals and investigates mixing two colors together in different colorspaces.
If you work on game textures, and especially on effects like particles, it's important to change the Photoshop option to use gamma-correct alpha blending. If you don't, you will get inconsistent results between your game engine and what you author in Photoshop.
This isn't as important for normal image editing because the resulting image is just being viewed directly and you just edit until it looks right.
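For anyone wondering what that option actually changes, here's a toy illustration of the arithmetic (my own sketch, not Photoshop's or any engine's actual code, and using a plain gamma-2.2 approximation of the sRGB curve): compositing a 50%-opaque white particle over mid-grey gives a visibly different result depending on whether you blend the encoded values or the linear-light values.

```python
# Compositing a 50%-opaque white pixel over a mid-grey background,
# with and without gamma-correct blending. Uses a gamma-2.2 approximation
# of the sRGB curve (the real curve has a small linear segment near zero).

GAMMA = 2.2

def over(fg, bg, alpha, gamma_correct):
    if gamma_correct:
        # decode to linear light, blend there, re-encode for display
        fg_lin, bg_lin = fg ** GAMMA, bg ** GAMMA
        out_lin = alpha * fg_lin + (1 - alpha) * bg_lin
        return out_lin ** (1 / GAMMA)
    # blend the gamma-encoded values directly
    return alpha * fg + (1 - alpha) * bg

print(over(1.0, 0.5, 0.5, gamma_correct=False))  # 0.75
print(over(1.0, 0.5, 0.5, gamma_correct=True))   # about 0.80 -- noticeably brighter
```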
In the computer vision course at my university (which I help teach), we teach this stuff so that students understand the physics, but at the end of the lecture I always note that for vision it's largely irrelevant and isn't worth the cycles to convert images to linear scale.
One thing not discussed though is what to do about values that don't fit in the zero-to-one range? In 3-D rendering, there is no maximum intensity of light, so what's the ideal strategy to truncate to the needed range?
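One common family of answers is a tone-mapping curve rather than a hard clamp. As a rough sketch (my example, not from the article), the simple Reinhard operator x / (1 + x) compresses arbitrarily large linear-light intensities into [0, 1) smoothly, after which you gamma-encode for display:

```python
# Two ways to bring unbounded linear-light intensities into [0, 1]:
# a hard clamp versus the simple Reinhard tone-mapping curve x / (1 + x),
# which compresses highlights smoothly instead of clipping them.

def clamp(x):
    return max(0.0, min(1.0, x))

def reinhard(x):
    return x / (1.0 + x)

for intensity in (0.25, 1.0, 4.0, 100.0):
    print(intensity, clamp(intensity), round(reinhard(intensity), 3))
# 0.25 -> clamp 0.25, reinhard 0.2
# 1.0  -> clamp 1.0,  reinhard 0.5
# 4.0  -> clamp 1.0,  reinhard 0.8
# 100  -> clamp 1.0,  reinhard 0.99
```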
As others commented the gamma scaling issues seem even more relevant.
Just please, don't use the RGB color space for generating gradients. In fact, it's ill-suited for most operations concerning the perception of colors as it is.
Interesting excursion: historically the default viewing gammas seem to have lowered, because broadcasting defaulted to dimly lit rooms, while today's ubiquitous displays are usually in brighter environments.
Then it immediately occurred to me that a toaster has some binary enumeration of the blackness level of the toast, like from 0 to 15, and this corresponds in a non-linear way to the actual darkness: i.e., yep, you have to know something about gamma.
It needs something that not only permits comparable overlays, but (perhaps with a third diff layer) also highlights the ugly/wrong pixels with a high-contrast paint.
A handful of images are only somewhat obviously problematic, but for most of the images, I really had to struggle to find undesirable artifacts.
If it's that difficult to discern inconsistent image artifacts, one can understand why so little attention is often paid to this situation.
Not just OS X. The majority of Linux games from the past 2 decades, including all SDL and id Tech 1-3 games, relied on the X server's gamma function. An X.Org Server update broke it about 6 years ago. It was fixed a few weeks ago.
The fund spends a lot on being actively managed, one manager received ~$60 million in bonuses in 2010. However, they won't reply when people ask if bonuses are actually financially beneficial.
https://tv.nrk.no/serie/folkeopplysningen/KMTE50009215/seson... @ 28:30
"Norway has pursued a classically Scandinavian solution. It has viewed oil revenues as a temporary, collectively owned windfall that, instead of spurring consumption today, can be used to insulate the country from the storms of the global economy and provide a thick, goose-down cushion for the distant day when the oil wells run dry."
Since then, the fund has grown six-fold.
At 4% a year that's $6,800 each in annual income. Not bad!
But as much as I'm rolling my eyes at their blanket statement, the spirit of "yes we can!" does way more for science and progress than the naysaying of critics.
Then there will be species conflicts. Merck people won't be able to mate with Novartis people because they'll be too different genetically.
The number of new drugs discovered per dollar of research has been dropping since 1960, and obvious explanations (like "the easy ones have already been found") turn out not to explain the phenomenon. [http://www.nature.com/nrd/journal/v11/n3/fig_tab/nrd3681_F1....]
This is something we should try to understand better, since it goes against the intuition that technology is an unalloyed good in scientific research.
I applaud the money they're spending, but the level of technophilia in the announcement gives me pause.
If you ask many GPs in the west they will tell you that a majority of illnesses they address are related to weight and diet. With an aging, increasingly overweight adult population alongside a sharp rise in child obesity come big consequences for health care over the next half century.
I'm not saying it's an easy task you can just throw money at, but if trends continue as they are now, the health and economic impact on society will be huge.
Bar some sort of national catastrophe (e.g. war, famine, disease), is it a crazy idea to think we could see a reduction in obesity levels? Are we simply resigned to the fact that we will just get bigger and bigger in the future?
I hope this money does not go into some sort of "in 100 years from now moonshot" as opposed to, we have huge urgent needs for money right now.
So what if a big leap in computational biology happened? Making faster machines is relatively easier and largely unregulated.
So you focus on simulating disease and some form of automation that tries to cure it. We have the problem of building these models for the computer to crunch on. So why not build them from people? Continuously monitor everything about someone: DNA, the various omics, self-reports. All the while, machine learning is trying to learn these models so other automations can change them.
So the first thing we need is a way to collect all this data. Itself a major medical breakthrough. How much data do we need to build the models? This seems to be the first breakthrough we need to even approach this.
For those comparing this to pharma companies. Pharma companies invest in drugs that they can make money with. It sounds as though this $3 billion is aiming at more general research and making it publicly available.
I wish Zuckerberg and Chan the best in this.
1. Governments around the world probably spend hundreds of billions yearly on research in medicine alone, and Zuckerberg wants to solve everything with his 3bn?
2. Our current technology is not even close to good enough to make the kind of major breakthroughs needed to say we 'cured cancer'. For example, the biggest neural networks we've trained have on the order of 10bn parameters, while the human brain has 100bn neurons, each, I'm guessing, having at least 10 parameters. Similarly for very small-scale technology. I think we need to tone down the hype in AI and computing a bit.
Archive.org link because JWZ dislikes HN- https://web.archive.org/web/20160818144913/https://www.jwz.o...
Whatever exists needs to be challenged continuously to keep existing. Any naive attempt to suppress all adversity forever will backfire.
Redd Foxx got to choose his fate, but what am I going to do while forced to sit around the hospital dying of nothing?
Find a socially acceptable alternative to disease before you eliminate it.
(Ok, to put it more clearly: get off my damn lawn and my damn planet you stupid non-exponential function understanding kids.. Please!)
It's called Facebook.
I guess it is easy to make empty statements if you make them apply to a far enough future. Mars colonies from Musk (still working on getting the colonists there in one piece of course), and all disease tackled!*
Mark Zuckerberg is worth 55 billion dollars. This is 5% of his net worth.
Mark Zuckerberg spent 20bn on Whatsapp. At his 28% shareholding in facebook that's a 5.6bn USD personal commitment.
The top 5 global pharma companies spent 42 billion USD on R&D in 2015 alone. Total pharma sector R&D is circa 200 billion. Every single year. They aren't anywhere near "curing all diseases". This initiative would fund them for 5 days.
Very generous, but let's keep some perspective.
Companies like Facebook (and people like Mark Zuckerberg) actively avoid paying taxes whenever they can, in a lot of countries that, for example, have public healthcare and other public institutions that would normally benefit from these taxes.
It's a bit like repeatedly stealing some kid's lunch, and then making fun of the kid for her weakness while appearing strong (and stronger in comparison to the weak kid) and compassionate when the kid passes out and you carry her on your back.
But he is in a much better position to work on the curious problems of ever increasing political polarization in our new Post-Factual world.
If I were to guess, over the next century that problem is going to result in vastly more misery than a slight speed-up in medical technology could compensate for.
Yes, this is a commendable effort, but I don't think they have the smarts/money for this. Even at an investor/patron level.
For example: MS Office used to cost 400-500 euro for the average home user a few years ago. That was ridiculous.
If you have a small shop and 2000 Facebook page likes, Facebook rips you off each time you want to reach them.
Maybe market dictates these prices but then again, they would be in the position to dictate the prices in the first place.
I don't know whether their passion loses momentum inside an 'initiative/fund', or whether it was doomed to be the opposite of its cause from the start.
EDIT: Before anyone replies with "Why would you want that?", it's fairly common to stage a project privately on GitHub before publicly releasing it. It'd be nice to see that the license is detected correctly before going public with it. As it is now, I don't know what'll happen until I go public, and my first public commit in the repo may well be fixing up something minor to get the license detected properly.
It's hard living with a debilitating medical condition that doesn't have good treatments or a clear cause. It's even harder when the doctor says "I think it's all in your head" instead of "sorry, we really don't know how to treat this yet." That sort of consistent dismissal/borderline victim-blaming from real doctors is what I think pushed my mom toward bogus alternative health practices. There appears to be nothing medically valid about chiropractors but at least they don't call you crazy just for telling them about the experiences you've been having.
Naturally, if consensus were established between nodes, using something like this would be unnecessary, but it turned out to be an interesting way of optimizing lookups in a DHT.
When I grew up to be a programmer, I never had much trouble with concurrent stuff.
IMO designing concurrent programs is conceptually similar to building complex high-throughput low latency railway networks in the game.
Concurrency can result in increased maintenance costs and complexity.
Concurrency is also not more efficient on a single core.
Concurrency can help with latency and response time.
In embedded systems in particular, there is an over-use of concurrency which often results in bloated, complex code.
Short URL provided for the insanely long paywall-avoiding link in the NYT OP.
I'm happy to answer any questions about it.
"Quantum teleportation" is a process of information duplication using particles that already exist, and have been positioned such that there is enough distance between them that we can rule out direct interaction between them (that we know of given the current state of physics). Quantum teleportation is the process by which we then manipulate only the particles on one side of the distance divide, such that particles on the other side "end up" reflecting the same state that the ones we manipulated were.
Although "ending up" is probably the wrong term, because we use particles in special states of which we already know they're entangled, then split them up (which does not cancel entanglement) and then we make use of their entanglement property: running an algorithm involving particles on one side should yield the exact same result as running the same algorithm on the other side, so a much more interesting algorithm is one that you run on one side in one way, and on the other in a different way, to effect a "data copy" without ever actually copying data (and very much without any kind of teleportation. The fact that you run your process with "the same particle" is the special part. Being able to even have two particles that are literally the same is a pretty bizarre bit of physics)
For those thinking that this is a step towards faster-than-light (FTL) communication: As far as I know it's fairly certain that quantum entanglement will not allow for FTL communication. Basic principle is that while measurements between both sides will be correlated, it's not possible to tell how they are correlated until both sides compare measurements.
Given that, it seems like the touted benefit of using quantum entanglement here is in securing communications, since your measurements will no longer correlate if a third party is also measuring? At least, that's what I gathered.
I'm sure they did nothing of the sort. At best they transferred an unknown state of a photon to another photon six kilometers away, then confirmed via measuring both.
> The challenge was to keep the photons arrival time synchronized to within 10 pico-seconds,
> Since these detectors only work at temperatures less than one degree above absolute zero the equipment also included a compact cryostat, said Tittel.
The dark fiber seems like it was important for synchronizing the clocks. And while they claim this could be used for encryption keys, that is really a roundabout way of saying that very little information was actually transmitted/received, although the article doesn't say exactly how little.
If this technology was refined, you'd just use this system to send secure messages without the need for an encryption key.
My understanding is this:
1. send entangled bits to two separate locations A and B
2. determine their states (which will always be exactly opposite) on both sides
3. send data using classical means from A to B, xor'd with the quantum measurements
4. decrypt the data at B with: data ^ quantum_measurements ^ 111111 (the last step being "invert all bits", and ^ representing XOR).
If that's it, how is this much better than sending a block of identical entropy to A and B and using an OTP? Theoretical tamper-proofness?
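As a toy model of steps 3-4 above (just the classical XOR part; the quantum piece is only how A and B end up holding perfectly anti-correlated bit strings), this is how I understand the scheme:

```python
import secrets

# Toy model: A and B each hold measurement bits that are exactly opposite
# of each other, and use them as one-time-pad key material.

n = 16
a_bits = [secrets.randbelow(2) for _ in range(n)]   # A's measurements
b_bits = [1 - bit for bit in a_bits]                # B's measurements (always opposite)

message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

# Step 3: A sends message XOR a_bits over a classical channel.
ciphertext = [m ^ a for m, a in zip(message, a_bits)]

# Step 4: B XORs with its own measurements, then inverts every bit,
# which undoes the fact that b_bits are the complement of a_bits.
recovered = [(c ^ b) ^ 1 for c, b in zip(ciphertext, b_bits)]

assert recovered == message
```

Operationally it does look like a one-time pad whose key material happens to be distributed via entanglement; as noted elsewhere in the thread, the touted advantage is that an eavesdropper's measurements disturb the correlations, not that the XOR step itself is stronger.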
In short, quantum entanglement is an effect that causes two quantum particles to share state instantaneously over arbitrary distances. It can not be used to transmit information faster than the speed of light, essentially because while it is possible to manipulate the particle at one end, it is not possible to arbitrarily set it to a chosen state (and as described fully in the no communication theorem).
Quantum teleportation is a way to transmit quantum information, ie the quantum state of a 'qubit', using both quantum entanglement and a classical communication channel. Because classical communication is required, no faster than light communication is possible. However, quantum teleportation is necessary if you want to transmit quantum information.
To very briefly sum up how it works, you start with a qubit whose state you want to transmit, along with two entangled particles, and a 'receiving' qubit that will receive the state of the sending qubit. Through an interaction between the sending qubit and the entangled particle on the sending side, the quantum state of the entangled particles is set to one of four possibilities. Which of the four possibilities resulted is sent via the classical communication channel from sending to receiving end. The receiving end then uses that information, along with the receiving-end entangled particle, to manipulate the receiving qubit into the identical state as the sending qubit, thereby 'teleporting' that state from sending to receiving end. The Wikipedia article has a more thorough layman's description, as well as the underlying math.
Caveat: I'm an engineer, not a physicist, so I may have made a mistake here as well, but the main take-away is that quantum teleportation is not the same thing as quantum entanglement, and its purpose is not FTL communication, but rather communication of quantum states.
https://en.wikipedia.org/wiki/No-communication_theorem
https://en.wikipedia.org/wiki/Quantum_teleportation
But, there are quotes in this article to the contrary. e.g.:
> Such a network will enable secure communication without having to worry about eavesdropping, and allow distant quantum computers to connect, says Tittel.
Was my understanding mistaken?
Isn't dark fibre either unlit capacity or leased fibres?
So, I stopped reading :)
The media plays fast and loose with terminology either out of plain ignorance or the desire to sell a story.
Cue endless discussions such as: What does teleport REALLY mean?
I recommend you read this instead, which provides a more level-headed and technically correct analysis of the vulnerability (which was there, even if not properly in the terms described by OP):
"Old news. This was fixed in 6.0.5.
Interesting note: The author is part of the rotor browser fork that is going no where so far. Doesn't look like the reported issue has been fixed there. In fact, no commits since before this blog post."
At least when I connect to Microsoft, Google, Facebook, etc. I don't expect to get hit by a driveby JS exploit, and Google does help with "safe browsing".
With Tor, you're one HTTP website (or non-HSTS website) away from a driveby virus, with no way to tell that you're connecting to a dangerous exit node.
This didn't use to be a problem, as it was essentially run as a sandbox project for the academic anonymity community. It was very up front about its capabilities and limitations.
Unfortunately, in recent years, the US government has been bankrolling more "privacy" software development through its propaganda arms (OTF, RFA, etc.), and the Snowden revelations have led private foundations to follow suit.
As such, the organization doubled down on rebranding to be a "human rights" _tool_, as this is what grant-giving organizations love to promote (free speech in Iran, activist publishing, etc.). This, combined with overly-enthusiastic do-gooders gaining more and more prominence in the Tor organization, has led to the dangerous situation of promoting inherently insecure software as a security solution to vulnerable people. This is a general problem in the scene (remember when those activists in South America got vanned for using CryptoCat?) - and one that I've been guilty of myself in the past.
I really hope the new board steers them back to the academic realm and slaps a big red USE AT YOUR OWN RISK warning on the tin. Unfortunately, I think the opposite will happen.
Seriously? That seems like a really weird - to say the least - decision to make about something this important...
The real issue with scientific publishing is that there is simply no penalty for publishing shoddy research. I know several academics who made quite a big name for themselves on research that was later partially or fully retracted. No one cared about that; there was no real reputational damage done. To tackle poor science, such "poor" scientific inquiry should be "punished" in some way. Similarly, it is terrible for the advancement of science that only novel or significant results get published -- there should be a way for researchers to benefit from publishing well-designed research which simply did not yield interesting results.
How to do that? I think Axelrod's tournament provides part of an answer. As in his examples, the individual incentives align to yield a pretty poor outcome for the members of the population (he runs an iterated prisoner's dilemma game). However, with the iteration parameters set up correctly, a cooperative strategy slowly becomes the evolutionarily stable strategy.
I can see how this could also be the case for academics. There is no law from up above that dictates that "number of papers published" is the ultimate metric of success. There is a culture, and processes, and institutions which have led that to be a leading indicator of academic success. If there were real motivation and impetus to change this, there is no reason to imagine that other metrics (and processes) could emerge that would much more highly value scientific integrity and thoroughness.
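To make the Axelrod point concrete, here's a minimal iterated prisoner's dilemma (a toy sketch using his classic payoff values, not his actual tournament code). With repeated play, mutual cooperation vastly out-scores mutual defection, even though defection still "wins" any single head-to-head matchup:

```python
# Minimal iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Payoffs per round: both cooperate -> 3/3, both defect -> 1/1,
# lone defector -> 5, sucker -> 0 (Axelrod's classic values).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []   # each player's record of the opponent's moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect, 10))  # (9, 14): defection "wins" head to head
print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30): cooperation scores far higher
```

The analogy in the comment above is that the right "iteration parameters" for science (replication, long memories, reputational consequences) could make the cooperative, careful strategy the stable one.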
> Worryingly, poor methods still won, albeit more slowly. This was true in even the most punitive version of the model, in which labs received a penalty 100 times the value of the original pay-off for a result that failed to replicate, and replication rates were high (half of all results were subject to replication efforts).
How can bad results still confer a net reward on their producers with a penalty like that?
That is, nobody is particularly looking for people doing poor work and handing out rewards. However, the "proper" methodology that we want takes time, which is expensive, and there are already plenty of things eating at the budgets of the work out there.
I do think this can be made better. But I see no reason to think it is just us chiding people for not doing better.
In art, we have studies that show that those who produce more also produce better. And it compounds. There is no reason to believe that at least some of this isn't operant in science.
The real problem is the lack of positive incentives for negative results. And I don't know how you fix that.
Conclusions based on small sample sizes should be seen as poor technique and an indicator of potential bias, and should suggest that a lot more validation is required before acceptance.
Publishing the data seems like it would help, although might not be possible where there are privacy concerns.
In many of the hard sciences there is not a requirement to list the products used such as chemical reagents or plasticware in a given experiment.
If we don't address this, then the whole research publication process is on fire.
Furthermore, why are all the APIs so diverse? Why aren't there reactive operating systems (as in OS with reactive API)? All of these ideas can be explored in Rust but on some level I'm not sure what should be the feature set of the OS of the future.
The current driver models aren't that great either.
> The iretq instruction is the one and only way to return from exceptions and is specifically designed for this purpose.
Not quite true. STI; LRET works too, and it's faster for stupid reasons.
Also, the AMD architects blew it badly here. That quote from the manual:
> IRET must be used to terminate the exception or interrupt handler associated with the exception.
Indicates that the architects didn't think about how multitasking works. Consider:
1. User process A goes to sleep using a system call (select, nanosleep, whatever) that uses the SYSCALL instruction.
2. The kernel does a context switch to process B.
3. B's time slice runs out. The kernel finds out about this due to an interrupt. The kernel switches back to process A.
4. The kernel returns to process A's user code using SYSRET.
This is an entirely ordinary sequence of events. But think about it from the CPU's perspective: the CPU entered the kernel in step 3 via an interrupt and returned in step 4 using SYSRET, which is not the same thing as IRETQ. Oh no!
It turns out that this actually causes a problem on AMD CPUs: SYSRET will screw up the hidden part of the SS descriptor, causing bizarre crashes. Go AMD.
Intel, fortunately, implemented SYSRET a bit differently and it works fine. Linux has a specific workaround for this design failure -- search for SYSRET_SS_ATTRS in the kernel source. I don't know how other kernels deal with it.
Of course, Intel made other absurd errors in their IA-32e design, but that's another story.
> Unfortunately, Rust does not support [a save-all-registers calling convention]. It was proposed once, but did not get accepted for various reasons. The primary reason was that such calling conventions can be simulated by writing a naked wrapper function.
> However, auto-vectorization causes a problem for us: Most of the multimedia registers are caller-saved. [...] We don't use any multimedia registers explicitly, but the Rust compiler might auto-vectorize our code (including the exception handlers).
This seems like a pretty convincing argument in favor of supporting this calling convention explicitly: only Rust knows what registers it is actually using. The current approach devolves into preserving every register that Rust might possibly use.
AVX-512 has 2kb of registers alone! That's a lot of junk to save to the stack on the off-chance that Rust decides to super-auto-vectorize something.
There's hardware support to help with this; see "Task state segment" (16 and 32 bit x86 only, amd64 is different).
Note that you can build your own, with a raspberry pi and a GPS add-on board that sends a 1 pulse-per-second signal through the GPIO pins. Sample instructions here:
And the GPS board can be purchased here:
That [2005 study in Pakistan] compared the health outcomes from antibacterial soap and soap that was indistinguishable from and otherwise chemically identical to the antibacterial soap, but without triclocarban. Compared with a control group who received school supplies, children living in households who received soap and handwashing promotion had 52 percent less diarrhea, 50 percent less pneumonia and 45 percent less impetigo. Impetigo, a skin infection, was a particularly important outcome, because laboratory studies had suggested that triclocarban would have antibacterial activity against the organisms that most commonly caused impetigo. There was, however, no difference in any of the health outcomes between children living in households who received the plain soap compared with children who received the antibacterial soap.
Related, it appears school supplies are not only ineffective against bacteria that cause diarrhea, pneumonia, and impetigo, but may cause these ailments.
This is why I moved our household over to the "Method" brand a few years ago (no triclosan). I'd happily move again if something else was safer, but I do enjoy foaming hand soaps.
Also why, for our newborn, I purchased WaterWipes (water + fruit juice only). They massively reduced diaper rash compared to the Huggies-branded wipes we were using before (and can be used on the face because they won't upset stomachs if consumed).
I like the FDA and am glad they exist, but feel like they were slow to act in this case. We've known for almost ten years (via peer-reviewed science) that these compounds are unsafe and ineffective.
I've stopped shaking some people's hands after seeing them "drizzle some water" for a second after, well, you know what.
I only know Scheme from reading SICP and enjoyed Clojure but hated the java/JVM part of it. I currently use Erlang for when I need concurrency/performant backends. But I'm not totally satisfied with it (for ex: the weak type system and records).
Edit: oh look, two programs I use all the time are written in Guile: GNU Make and WeeChat. https://en.wikipedia.org/wiki/GNU_Guile#Programs_using_Guile
Personally, it's not at all a problem. I'm not a technophobe (I have an iPhone, use Facebook, etc.), but it's done nothing but positively improve my life. I can keep in touch with friends around the world and work remotely from anywhere, all thanks to the beauty of these "distracting" technologies.
However, even though my life is heavily entangled with technology I certainly don't feel "addicted" to it. I have no problem going out into the wilderness for a week and having 0 contact with the world. I certainly don't interrupt conversations to check my phone.
If technology is hurting your life, the problem could just as easily be with you as it is with technology.
If I do anything at all that interferes with the quality of my sleep, I'm screwed, and The Stream will suck me in. Conversely, things that increase the quality of my sleep -- exercise, diet, not using gadgets after a certain hour -- all increase my ability to fight off Stream-induced distraction.
So that's my recommendation. YMMV.
Because of this I turn off all notifications apart from phone calls and text messages. I don't have app badge counters or notifications for email either; if it's that important, the person will ring me.
First, I don't use any devices directly after waking up. I meditate for about 20 minutes upon waking and then try to read fiction for 40 minutes. So, all in, somewhere around an hour of no device distractions before starting my day.
Slack is one of the biggest interrupting factors while coding these days, more so than IRC ever was for me, so I try to have chunks of time during the day with it closed. This is something I've struggled with recently as co-workers always expect to be able to get in touch, but often I really need 30-60 minutes of uninterrupted focus for real tasks.
For personal things, I deleted Facebook and feel quite a bit better. I still scroll Instagram too much. I deleted the Twitter app from my phone and will only check it from time to time on the web. I try to turn on Do Not Disturb mode in the evenings, but it's hard when you have systems that potentially could go down and things could get escalated to you.
On top of that I try to take psychedelics a few times per year, not in any type of party settings, but with people that are close to me. Screens tend to turn up this feeling of disgust when I look at them in that state, so I automatically disengage with them. I find that for at least a short while after the trip my usage of distracting Internet things goes down a lot as well.
Interested to hear other strategies!
Your "virtual" life is also real. The news that you read, happen in real world. The people whom you talk to are real people. There's no "real life" and "non-real life". Everything that happens to you is "real life" by definition. That includes social networks etc.
The only thing that matters is whether it is a life that you're comfortable with, or not. And there is certainly a point that many people are not actually comfortable, but forced into conforming.
Being uncomfortable about it because it's "not real" is a fallacy, though.
Never been happier.
Right now I am back with an internet subscription and I'm miserable. I am an internet addict and having internet at home is very bad for me. I am strongly considering cancelling my internet subscription again.
Silly as it will sound, I'd never thought of it that way. Cunning, brilliant, and cunning again.
It's an open source project either:
What I do is, essentially, the following: every day at around 0-1 am (before I go to sleep) I set a timer so that it fires at 7-8 pm (that's when I get home). This lets me keep myself from browsing unnecessary sites (Facebook, Twitter, HN, Reddit, etc.) that would otherwise distract me from doing meaningful work.
As for other devices: not long ago I had Twitter, Facebook, VK, and a few other social media apps on my iPhone. I deleted everything except Twitter, which I check rarely (<= 10m a day). Deleting the Facebook app also contributed greatly to my smartphone's battery life.
I'm taking a vacation soon, 9 days with no computer will be a much needed reset.
The author instead seems to be taking his personal opinions about technology and trying to apply them to everyone. It's not enough that he doesn't like these things. We should not like them either and here's why. Time with my kid is invalid because I also have the TV on? GPS leads me to stop remembering things? Emojis are unsuitable replacements for voicemail? The author imagines restaurants where smartphones must be surrendered upon entering. Cool, you've described a place I'll actively avoid. Where's that "stop liking things I don't like" GIF when you need it?
"Our enslavement to dopamine?" -- how about YOUR enslavement to dopamine? Leave me out of it.
There's also the ethical question around people not being aware of their obsessions/addictions (especially in the mobile gaming space).
>The world is too much with us; late and soon,
Getting and spending, we lay waste our powers;
Little we see in Nature that is ours;
We have given our hearts away, a sordid boon!
This Sea that bares her bosom to the moon;
The winds that will be howling at all hours,
And are up-gathered now like sleeping flowers;
For this, for everything, we are out of tune;
It moves us not. Great God! I'd rather be
A Pagan suckled in a creed outworn;
So might I, standing on this pleasant lea,
Have glimpses that would make me less forlorn;
Have sight of Proteus rising from the sea;
Or hear old Triton blow his wreathèd horn.
I'm not saying I have a life but I'm hoping to get one real soon now.
> ... trying to describe what I was feeling. The two words "extreme suffering" won the naming contest in my head...
Lloyd's of London, Berkshire Hathaway's National Indemnity, XL Catlin, etc.
Basically they are buying a policy from one of those companies, adding 20%, and selling it to you.
An insurance company works by spreading risk over a large area. By selling everything in NY they are increasing their correlation, which raises risk. One of the reasons for the subprime crisis was that no one expected housing to fall in all markets at the same time.
I would like even more a non-profit that just pays back the spare money. I do not know how this would work with regulations. I guess in Germany this could be done through a "Genossenschaft", which Wikipedia tells me has a US equivalent called a co-op. Would this actually work?
Edit: Realized that you could just grant discounts as there cannot be a profit anyway. Would be awesome to see several companies with the same model, first competing on prices and ultimately the percentage of the fixed fee.
But the "P2P" branding is likely going to be confusing to a lot of people. In fact even after reading the explanation I still don't understand the peer to peer model in this context and I know what Peer to Peer means.
There are 8 steps to get a quote:
1. First and last name
2. Full address
3. Question (renter/owner)
4. Roommates/alarm
5. Current owner of insurance?
6. Jewelry over $1000?
7. Email, birthday
8. Quote, which seems highly generic and could be done without 6 of the 7 previous steps.
I can't even imagine the conversion rate from just checking it out to paying customer; it can't be high at all (outside the founders' circle).
ZipCode -> Quote should be the only step. The rest should happen after you've convinced me of your value. By the way, don't email firstname.lastname@example.org, it's not really my email.
How does that compare with the profit margins of a traditional insurance company?
With So-Sure you link up with friends and get a bonus if nobody claims. Of course that means nobody links with that friend who always loses their phone, which in theory reduces their risk and pays for the bonus.
There are charities and causes I do support and there are those that I don't. There are charities that oppose each other in their stated goals as well.
I see a section that talks about becoming a supported charity, but nothing about criteria or who is already in.
Kudos to the author for the excellent writing, and to the Sheldons for sharing their story. Speaking as a white man who grew up 8 blocks from the apartment mentioned in the article, it's both surreal and saddening to learn how drastically different (meaning unfair) their lives were and continue to be.
Please read more (in Spanish): http://www.infobae.com/politica/2016/09/14/bonadio-ordeno-de...
So nice to read this! It's a pity that ideals like these are things from the past, completely gone in our western, capitalist culture. Words like those now mostly invoke feelings of cynicism and snarks about socialism or communism.
We have never owned a crib or bassinet by the bedside even.
I may be wrong, but I remember hearing it was during Victorian times that this became popular.
I know there are people afraid of smothering or crushing their infants. Most negative studies seem to come out of the US. Other countries don't seem to have the same issues cited in US studies. I think it is common sense not to sleep with your baby in bed if you have a waterbed, tons of pillows, a soft mattress, or a fluffy comforter.
My wife and I don't drink alcohol either, so there is no possibility of being so drunk that we would be unaware of smothering our baby, and neither of us is hypermobile in bed. All mammals keep their young close. 'Co-sleeping' is a modern term that makes it sound out of the ordinary, but in fact it is very ordinary.
People will say they slept better and the baby got used to being in another room, or across the room, but I can say from three babies' worth of fathering, they get comfortable real quick being in bed with their parents.
A great deal are so fiscally short-sighted, misguided, and downright hostile that it really takes away any hope for the "average American". The HN demographic should represent some of the best of the best of the US, and well... I guess this kinda explains the rise of Trump.</rant>
Joking aside, this looks like a fantastically practical approach to social support. No complex and very expensive program, just a very practical "starter set" for parents. It relieves a possible financial burden and, of course, makes sure that no immediately important item is missing at a time when parents are probably thinking about a lot of things, but not shopping.
We even got the pram, cot and car seats (plural!) for free. It's crazy.
Any clothing we bought was just because we liked it and not because we needed it. Seriously, I don't know how is it in northern cultures, but where I come from it is extremely common to get the clothes from your cousins (and pass them on to the next :)), regardless of your money (unless you are rich, in which case I don't know how it works :)).
I'm not sure if this is a Latin thing, Southern European thing, Mediterranean thing or what :)
Edit: My point being: Corruption (as mentioned by others in different countries) and the very strong family support networks in other cultures might be the reason why this doesn't exist in other European countries.
If you have twins, you're allowed three boxes; if triplets, six. We had twins so we opted for two boxes and the money.
Yes, we did use the box. Yes, we did use all the clothes, lotions, toys, etc. In fact, our son basically wore nothing except eurogarb for his first year.
There are some commercial versions of this available in the US, though they're relatively pricey. The fact that this is a public benefit in some places is truly awesome, and something I wish could be politically tenable here in the US.
Apparently spurred by the article, a few startups have appeared offering similar boxes in other countries.
You might be interested in some of the comments here, from 3 years ago. https://news.ycombinator.com/item?id=5817728
 - http://www.washingtontimes.com/news/2014/oct/3/editorial-the...
If this topic is interesting to you and it is a discussion you want to get involved in, please do email me on email@example.com. All welcome!
We are building a powerful group of people to advocate for the transformative impact of FinTech in developing economies, and we'd love to start a conversation with you too.
is the rest of the title.
Also, the bank I work for allows mobile banking for 11 countries in Africa. 11/54 is not bad if you ask me.
Having mobile banking services is possible, of course, but actual banking services are more than tracking my latte spend. The guy who founded Bank of America walked into San Francisco the day after the 1906 earthquake with a wheelbarrow of cash and started lending to the shop and business owners who needed to rebuild immediately.
That's the kind of banking service Africa will need in the next few decades, and New York or London or SV are not planning on an app that can do that.
So the consumer banking apps, they will come, but a banking infrastructure, funnelling loans into real businesses and infrastructure - that's a different kind of banking. One it seems we forget.
So, why are they reluctant to just issue their band-aid patch to the BIOS -- after all, it's really the path of least resistance here?
Yes, there has been some deflection of blame here. The argument that every single OS except Windows 10 is at fault for not supporting this CRAZY new super advanced hardware doesn't make much sense.
"Linux (and all other operating systems) don't support X on Z because of Y" doesn't really apply when "Z modified Y in a way that does not allow support for X."
To state it more plainly, this "CRAZY new super advanced hardware" has a trivial backwards compatible mode that works with everything just fine, but it is blocked by Lenovo's BIOS.
It was a shame to see the initial posts this morning hit the top of the page without any more evidence than a single customer support rep, who was unlikely to realistically have inside knowledge of some kind of "secret conspiracy" by Microsoft to block Linux installs.
- MS shouldn't be blamed based on what the CEO of Lenovo says, let alone what a tech or BB rep says.
- MS shouldn't be blamed for new crimes based on past behavior
Why care about MS or any other megacorp? Because this salem witch trial shit is toxic and should not be condoned against anyone.
Rushing to suspicion and demanding answers is great. There is no downside to saving blame for after the facts are in.
Obviously vigilance implies some amount of false positives. It is easy to dismiss a problem once better information is available. It's great that this Lenovo situation is simply a misunderstanding about drivers, but that doesn't invalidate the initial concern about a suspicious situation.
Garrett should be condemning Lenovo for not making a perfectly configurable chipset feature....configurable and defending Linux and freedom of choice on hardware that has always traditionally been that way. But, no, he doesn't. He defends stupidity as he always does.
Why would anyone buy their stuff?
The fact that Linux got caught in it is just collateral damage.
The modder that flashed the custom BIOS was able to boot linux on his first try.
However, I don't agree with the conclusion that Lenovo isn't to blame. They went out of their way to ensure that even power users playing with the EFI shell won't be able to switch to AHCI mode.
I don't care about Microsoft here. Lenovo showed its bad side and I probably won't be buying their devices anymore - which is a pity, as I'm writing this on my Yoga 2 Pro, with my company's Yoga 900 (fortunately older, unblocked revision) nearby and I liked those devices.
How about we pay some attention to the second part of:
Lenovo's firmware defaults to "RAID" mode and ** doesn't allow you to change that **
Sorry to be that guy, but the elitism is pretty misplaced these days...
The only way to convince these folks, it seems, would be a smoking gun, or even better a signed confession from Satya and Lenovo admitting to shady behavior.
Since that's not how shady behavior works in the real world, presumably many here are supporters of the camel-in-the-sand approach, with a zero-tolerance policy towards non-conforming camels.
"For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is miniscule."
This is a really poor argument, and slightly disingenuous. Sometimes, people change their use for a device. Maybe they want to explore linux in the future, maybe they want to sell the laptop to someone who wants to use it for linux...
That the blame is being possibly misdirected ought not to detract from the fact that blame is necessary. If users don't vocally oppose measures like this, the industry will assume that this kind of restriction is reasonable. It's not. Yes, power management is important, but anyone who puts linux on their laptop will quickly learn there are limitations to the features of that device that were originally tailored to the OS the device shipped with. That's a good lesson, and a good opportunity for a community to develop around the device (if it's good enough) to mitigate those deficiencies and adapt them for the particular linux distro.
In short, Lenovo is at fault for not being up front about this limitation, for not explaining it, and for not devoting at least some resources to mitigating for their potential linux-inclined users.
Then again, perhaps a linux-inclined user might also be one of the many that don't trust Lenovo after their self-signed certificate scandal.
When I first heard this I thought this was what I always thought would eventually happen with Secure Boot, but we haven't quite arrived at that point yet. Give it time when we eventually end up with critical mass.
What seems to be happening is similar to Kaby Lake in reverse. You restrict the hardware and drivers not only to exclude other operating systems (which is a side effect), but to build in hardware obsolescence, so you'll find yourself without any drivers or means to upgrade in future Windows versions.
You don't lock any chipset into an idiotic RAID mode in a laptop with a single disk. To claim this is done for power management reasons over perfectly standard AHCI is so stupid it isn't even funny, but Matthew likes defending stupid. He swears blind that Secure Boot will always be in a position where it can be turned off and Microsoft will not try anything on with key access. Because, you know, Microsoft's word and all that.
Be in no doubt, Microsoft and OEMs together want us all to throw away hardware and upgrade more regularly.
It seems to address a known pain point in bcrypt (max length), implements a pepper in a secure way (which cannot inadvertently degrade security), and is otherwise doing things which are best practices (high work factor, per user salt, etc).
I know peppers remain controversial (some people claim they're pointless, and make a good argument). But ultimately nothing Dropbox is doing with peppers in this article makes your password easier to break, only harder.
I'd call this scheme 10/10.
Years ago I relieved myself of the stress by using a password manager. Now for all I care they could be storing it in plaintext and it wouldn't make a damn difference to me. Problem solved.
Ok, fair enough...
The debate over which algorithm is better is still open, and most security experts agree that scrypt and bcrypt provide similar protections.
... wait, what?
Seems like the combination of strong hash + encryption on a HSM is the way to go these days. Dropbox's scheme looks good to me.
I would like to know if "salted bcrypt" + SHA512 hashing is really safer than using just SHA512 (e.g. whether it risks making hash collisions easier to find, etc.).
I have heard of it being a separate, IP-restricted server with a daily-changing IP address, etc. A simpler use case would be to store OAuth2 tokens or some kind of PII.
This part confused me. How can truncating to 72 bytes be a more severe reduction in entropy than generating a 64-byte hash?
I don't get this point. Why is it harder to rotate pepper for a hash compared to an encryption key?
There are actually two problems with bcrypt:
- It truncates after 72 characters
- It truncates after a NUL byte
Additionally, if you're going to use AES-256, don't implement it yourself. Use a well tested library that either uses AEAD or an Encrypt then MAC construction.
Don't get me wrong, what's described there is super-important to secure the authentication of today, but what about a word for the authentication of tomorrow?
There already are various solutions. Passwordless is a familiar one for nodejs, and I recently bumped into the promising Portier, which is, according to its authors, a "spiritual successor to Mozilla Persona".
bcrypt is known to choke on null bytes. Each SHA512 hash has a 25% chance of containing a null byte if you use the raw binary format.
Using hex or base64, of course, decreases the amount of entropy that you can fit into bcrypt's 72-byte limit. But you can still fit 288 to 432 bits of entropy in that space, which is more than enough for the foreseeable future.
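For concreteness, here's a sketch of that pre-hash-then-bcrypt pattern (my own illustration, assuming the third-party Python `bcrypt` package, not Dropbox's actual code):

```python
import base64
import hashlib

import bcrypt  # third-party package: pip install bcrypt


def hash_password(password: bytes) -> bytes:
    # Pre-hash with SHA-512, then base64-encode so the value fed to bcrypt
    # contains no NUL bytes. Truncate explicitly to bcrypt's 72-byte limit;
    # 72 base64 characters still carry 432 bits of the digest.
    digest = hashlib.sha512(password).digest()    # 64 raw bytes, may contain \x00
    encoded = base64.b64encode(digest)[:72]       # NUL-free, ASCII-only
    return bcrypt.hashpw(encoded, bcrypt.gensalt(rounds=12))


def verify_password(password: bytes, stored: bytes) -> bool:
    encoded = base64.b64encode(hashlib.sha512(password).digest())[:72]
    return bcrypt.checkpw(encoded, stored)


stored = hash_password(b"correct horse battery staple")
print(verify_password(b"correct horse battery staple", stored))  # True
print(verify_password(b"wrong password", stored))                # False
```

The base64 step is what sidesteps the NUL-byte problem; the explicit slice just makes bcrypt's 72-byte truncation visible instead of implicit.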
Huh? BCrypt works by stuffing the password into a 72 byte Blowfish key and using it to recursively encrypt a 24 byte payload. Either it's truncating, or it's pre-hashing the password to fit much like they are.
The link they use to justify it is funny: http://arstechnica.com/security/2013/09/long-passwords-are-g...
That's just a naive PBKDF2 implementation that's pointlessly reinitializing the HMAC context each iteration instead of just doing it once at the start. The difference between storing a 1 byte and a 1MB password with PBKDF2 should be on the order of a couple of milliseconds.
As a result, we're probably going to have a bunch more issues like this one: http://blog.ircmaxell.com/2015/03/security-issue-combining-b...
I'm not looking forward to having to talk people off that particular ledge for the next several months...
Here are some cool things you might not notice:
- We were born pre-AWS and actually run our own data centers, have an ASN and manage our own network, host 2PB of data that is geographically replicated in near-real time, and have successfully defended against 200Gbps+ DDoS attacks.
- We've put a ton of care into bringing all of the pieces together (websites, ecommerce, email marketing) in a super integrated and seamless way. Check out, for example, how you can customize all of the store emails with Weebly Promote (email marketing), how when you send out an email campaign you can automatically track sales generated from that email, how we automatically import and create smart groups -- like frequent customers who haven't purchased recently -- or how we will even recommend pre-created emails based on actions you take: adding new products, putting products on sale, etc.
- The eCommerce platform has been significantly upgraded, with things like real-time shipping (UPS, FedEx, USPS, DHL integrations), abandoned carts, gift cards, a re-built tax & shipping engine, a new store front & checkout, bulk editing and power seller features, and a whole bunch of other cool stuff.
- Check out the apps for iOS and Android. It was pretty hard engineering work to get a full live editing experience with a fast, native UI that needs to ultimately render down to a slow WebView (no one else that we're aware of has been able to pull this off like this).
- We've built a web code editor (similar to Mozilla Thimble from a few days ago) that's pretty nifty. Create a site, then go to Theme > Edit HTML / CSS (screenshot: https://www.dropbox.com/s/ry5aeykn1l56l17/Screenshot%202016-...)
- Here are some of the cool new themes: https://highpeak-theme.weebly.com/, https://verticals-business-slick.weebly.com/, https://pathway-financial.weebly.com/, https://urbandine-business.weebly.com/, https://jaysims-oasis-merch.weebly.com/, https://oikos-test.weebly.com/
Our ultimate goal is to create a platform that small to medium creators of all kinds can use so they can focus on what they love doing, and less on the business of running their business. Imagine all the time spent learning from awesome people like patio11 -- what if we could make the whole online side of running your business a whole lot easier? That's the dream, and this is the first step in that direction.
Happy to answer any questions and would love your feedback!
Excellent photography is great, but many of the small businesses primarily targeted by these services don't have it. And stock photography looks like, well, stock photography.
I'd love to see a service like this embrace themes that don't depend on great photography. Themes that make good use of typography and other non-image visual elements.
I've been surprised by how important and effective email still is to most small businesses, and it's a hard problem to solve; having it integrated with the rest of your site and commerce solution is even harder. Ecommerce is a more obvious need, but has a lot more solutions available, including for non-technical folks.
Congrats on launching cool new stuff all these years later!
Chrome Version 53.0.2785.116 m, Windows 10
The big questions that come to mind:
1. How does this compare with hosting WordPress? I like that with WordPress, if some issue comes up, you can find someone who knows the innards and can program what you need. Does the user have that level of access?
2. How does it compare with WIX, and the other site building competitors?
3. What if someone has built some great piece of code and I want to install some of the functionality on my site, can I do it?
It just seems that it is a closed enough system that I have to rely on Weebly engineers to do everything.
Can we bring our own HTML/CSS and integrate with Weebly? Even with custom payment flows?
Weebly looks promising for that, and at $25-50/month isn't too bad.
For people more familiar with this area, what other options are there? Wordpress+shopify? And how do the fees stack up with the different options?
For example, Shopify has "metafields" which allow Apps to add strings of data within a namespace/key, which can then be used in the liquid templating engine.