Instead, in construction, they use a system of checks to ensure that different experts consult one another so that every decision is reviewed by a relevant expert.
I suspect that the "chief architect" approach that Brooks advocates may have become obsolete as well since The Mythical Man-Month was written. Perhaps software developers could learn something from the newer methods that replaced the "master builder" model in construction.
Cathedrals are not, generally speaking, profitable. They represent the expenditure of lots of capital over a long period of time.
Bazaars don't cost much to start. You can start quite small and have a functioning system that does useful things for people. They can grow quite large, and when they grow too large it becomes difficult to find exactly what you want without a really good map. But you can probably quickly find a bunch of things that are more or less close to what you want.
Cathedrals are not easy or cheap to repair, but the investment is so large that people usually prefer to repair them. A bazaar that doesn't work out makes some local people sad, but they will go to another bazaar that is a little less convenient for them, and perhaps do better there.
It's nice to have some cathedrals, because they feed the soul. But you need to eat every day, so there will always be bazaars, and if you need to make a choice, the bazaar is going to win unless you have a lot of resources stored up to fall back on.
Most of the original developers have long since moved on, there are design problems, various teams and managers rebuild or duplicate work, and management sometimes imposes big changes just before release.
Software quality is hard to judge from the outside, and takes longer to build.
The amount of productivity available to Mr. Kamp for free today is conservatively double or triple that available in 1999. Databases, web frameworks, scale know-how, IDEs, hosting platforms, the list goes on.
He harkens back, sadly, to an era in which codebases like Genuity Black Rocket cost $100k in licensing, and ran on $30k/month Sun servers. Seriously.
Languages are faster, development times are shorter, and chips are WAY faster. And, code can be pushed out for tinkering and innovation onto github for free. Combine that with his estimate that we have 100x more people in computing, and the combination is a riot of creativity, crap, fascinating tech and everything in between.
The bazaar is messy, but I'm not aware of any solid critiques which show cathedrals are more efficient at the multiples-of-efficiency kind of gains we get from legions of self-interested, self-motivated coders.
At the very least, I would love to see companies created around popular open source tools and verticals to create designed end-to-end experiences. Download, double-click, start coding, and see something on the screen.
In a sense Windows was a cathedral. It bent over backwards to accept the new, welcoming many under its umbrella.
Sure it is a stretch, but the Windows platform (for years) required backward compatibility. That was the world Microsoft worked in.
Today, the OS still maintains cathedral aspects, but the company is embracing the bazaar way more than I ever thought I would see in my lifetime.
It is true that configure scripts probably do some useless things: "31,085 lines of configure for libtool still check if <sys/stat.h> and <stdlib.h> exist, even though the Unixen, which lacked them, had neither sufficient memory to execute libtool nor disks big enough for its 16-MB source code", etc. But then what is the alternative? Having every programmer who wants to release some software write a configure module from scratch? This is called code reuse, and yes, it's not perfect, but it saves time by not reinventing the wheel again and again, by reusing something that is stable and has been around for some time. Sure, it generalizes over many architectures and does some useless things, but then again, who cares about an extra 5-10 seconds of the "configure" command when you are covered for all the strange corner cases it already handles?
1) pay for the cathedral
2) use the bazaar, possibly needing to seed it
3) do without
Since 1) is typically very expensive and lacks guarantees of suitability to purpose and continued existence, the ROI is so massively negative few can even try it. 3) often isn't an option.
The bazaar is an inevitability. No one has been mandating it for the past decades. No laws enforce it. It occurs because it is the only available/possible option.
Unix is the bazaar enabler: an OS so simple as to be mostly useless without third-party additions, but also without enough of an overall design to give those third parties much of a guide on how things should fit together. The mess that Unix enabled is also the reason we can't get away from it and move on to better-architected systems.
More like carefully designed procedures (not the software kind), meetings, and suits and ties.
The waterfall process can be as technically deficient as you want it to be, just like agile.
Of course, the "Bazaar" included the project named "Linux," which does have a single person in charge. So "single person in charge (or not in charge)" isn't really what the bazaar is about.
More generally, the article laments the lack of quality in modern software. I don't think that's a problem of Cathedral vs Bazaar, though, since software designed top down with authority telling everyone what to do can be low quality (indeed, you can see this sort of thing happening in many scrum style companies).
Rather, low-quality software is a reflection of the skill of the people who built it.
Maybe compiling everything on every computer is not the best approach...
At work I'm happy to use Linux, but I use it for things it's great at, such as simulating our prod environment, coding, etc, but at home I just don't want to think after a long ass day, I want to enjoy computing.
It's not always the ideal technical architecture for the problem.
Unix was fragmented long before 1991, when autoconf was created. Prior to autoconf, Perl's Metaconfig did this dance, and a lot of other, lower-quality systems like imake were used to work around the problem. The forces behind Unix's fragmentation were actually not the Bazaar (open source software), but attempts by multiple parties to independently build their own Cathedrals (in attempts to differentiate and provide value with proprietary systems). Of course autoconf, as a GNU project, was, in Cathedral & Bazaar terms, a Cathedral, not a Bazaar.
Autoconf is ugly/copy-paste code because it is comparatively rarely run, so as long as it produces a passable result, people use it and focus their efforts on other, bigger itches.
There's also a long history of open source products which have enjoyed higher quality implementations than their proprietary equivalent. The Apache project itself stood as a higher quality web server than a lot of other choices (not as true today, but it was true for quite some time).
One could go on...
- hacky vs. beautiful
- professional vs. amateur
- waterfall-designed vs. evolving
- centralized control vs. consensus control
- crufty vs. well-maintained
- open-source vs. closed-source
Except epoll. And vsyscalls. And...
You know, maybe Linux isn't well managed.
But I like understanding the code, so I read and read and puzzled and fended off "hurry up" comments.
And then I added two lines and it basically worked.
I am in favour of a slow code movement (http://www.mikadosoftware.com/articles/slowcodemovement) which seems relevant here.
Even better was a guy I worked with, a physics PhD from central China, whose daily stand-up reports mostly consisted of: "I spent yesterday reading around the subject; I shall spend today reading around the subject."
Love him, love the brass balls.
Yes, it's an inefficient pile of garbage built one heap on top of another.
But it's also incredibly useful.
Modularity and code reuse are, of course, A Good Thing. Even in the most trivially simple case, however, the CS/IT dogma of code reuse is totally foreign in the bazaar: the software in the FreeBSD ports collection contains at least 1,342 copied and pasted cryptographic algorithms. If that resistance/ignorance of code reuse had resulted in self-contained and independent packages of software, the price of the code duplication might actually have been a good tradeoff for ease of package management. But that was not the case: the packages form a tangled web of haphazard dependencies that results in much code duplication and waste.
When you have many, many independent teams of people developing software, you may want to implement something someone else has already done. As a member of your team, you have two choices: 1) re-implement the original as part of your own cathedral, or 2) put some glue around the existing solution.
Sometimes, 1) will be the best choice. But sometimes - and this is usually the case with Unix tools - 2) works just as well, and you get the benefit of only having to write and maintain glue, rather than a constantly morphing feature implementation. If the original was done well, and your glue supports it well, you get a feature without paying for it.
The foundation of Unix is a strong Cathedral. But this foundation is what makes the Bazaar fit so well around it: it provides something strong to glue other shit to. As long as you have someone to continue gluing shit together, you can keep adding pieces ad infinitum, and what you lose in the end is essentially disk space and compilation time. I'm willing to accept that.
"Autotools must die"
It's a bit like "normal" people sticking it to the elite. The rest of the world getting a piece of the cake too...
What most developers don't realise is the level of engineering strictness that goes into anything safety-related. The rules and regulations related to anything that affects the human body are in a different league than what most developers are familiar with.
What is a problem here, is that the design (not the code) apparently did not take into account any messaging security, relying on obscurity as its only defence.
If the code was open-sourced, don't expect to find lots of buffer overflow attack vectors, or simple things like that. It's the design of the system as a whole that is at fault, and that is already open.
Medical devices such as these are not black boxes to the people that certify them, everything is open to them, source included. Having worked in that sort of area, I trust the systems that are in place.
Open sourcing this code would do a lot to mitigate these issues.
It's mentioned in the article.
Not a rhetorical question.
Ask any question and you will find an answer. Any question you have, no matter how banal or left-field. What is the weather? Is my grandson a lesbian? How do I eat pizza in Italy?
Where is the biography section? How do I understand the Dewey Decimal System? What is in the Special Collections, and what are the hours -- and do I need an appointment? The computers are down... is there a way I can search for books offline without randomly roaming the stacks?
She doesn't know the code that's running on machines inside her body.
I don't even know the code that's running my heart.
And yet, I trust it.
But it doesn't really matter, because you have no control over this anyway. All you have to do is decide whether the salary is high enough; the rest is irrelevant.
Some people really enjoy playing the lottery. If that's you, well, knock yourself out getting into the details of the options package! Just don't kid yourself or make any serious life plans based on your hopes about that stuff.
And this is also the reason why I'm very wary about equity. I'd certainly want to work for a company that's going to do well financially, regardless of how I'm being compensated, but I have only a layman's understanding of business and finance. I'm simply not equipped to judge the financial health of a potential employer and how much its equity is really worth.
Hahahaha, oh man so here's the part where I just dump all my emails about the answers I have gotten back from CFOs over the last decade:
- "common stock shareholders aren't privy to financial details"
- "we don't share that information"
- "I discussed it with the board [consisting of myself] and they decided not to release valuation information"
- "The stock options are just to retain employees!" and other awkward non-sequiturs that are distinctly not a financial statement.
So do y'all want to unionize or nah? I have a feeling Peter Thiel and Andreessen would totally support it, and we already make enough to make union dues negligible.
It's probably worth mentioning that there are a lot of ways to implement convolution with a kernel, and the kernel can be of any size, not just 3×3. The explanation here shows how to implement the output-side algorithm nonrecursively; http://www.dspguide.com/ch6/3.htm gives this for the one-dimensional case. But you can implement it on the input side instead (iterating over the input samples instead of the output samples), there are kernels that have a much more efficient recursive implementation (including zero-phase kernels using time-reversal), you can implement very large kernels if you can afford to do the convolution in the frequency domain, and there's a whole class of kernels that have efficient sparse filter cascade representations, including large Gaussians.
(To say nothing of convolutions over other rings.)
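The output-side algorithm mentioned above (iterating over output samples and gathering input contributions) can be sketched in a few lines; a minimal illustration for the 1-D case, not the dspguide book's own code:

```python
def convolve_output_side(x, h):
    # Output-side 1-D convolution: for each output sample y[i],
    # gather contributions from every input sample that overlaps
    # the (flipped) kernel at that position.
    y = [0.0] * (len(x) + len(h) - 1)
    for i in range(len(y)):
        for j in range(len(h)):
            k = i - j  # input index contributing via kernel tap h[j]
            if 0 <= k < len(x):
                y[i] += h[j] * x[k]
    return y
```

The input-side variant just swaps the loop roles: iterate over input samples and scatter each one, scaled by the kernel, into the output; both give identical results.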
That is to say, there is more to cellular automata than the GOL, and one bit per cell.
So nice and clear.
I've been working with images for 12 years and I was never sure exactly what 'sharpen' actually did ...
It seems silly that university astronomy departments don't invest in small arrays of these telescopes with automated tracking software deciding where to point each night. With a modest number of sites you could have global coverage of all nearby objects of interest regardless of foul weather in some spots.
But you know, they need more money to expand administration instead.
Other efforts like this include:

- American Association of Variable Star Observers https://www.aavso.org/public
- Center for Backyard Astrophysics http://cbastro.org/
and I'm sure there are others I don't know about.
Another field that involves lots of work from amateurs is microlensing. In a microlensing event, one star passes in front of another, and its gravity lenses the light from the star behind it. This produces a characteristic increase and decrease in the brightness of the background star over the course of a day or so. If the lensing star (the star in the middle) has a planet, this will distort the brightness curve. These sorts of observations are extremely time-sensitive, and it's critical that data is collected during certain, very narrow bands of time. Weather or other observing priorities sometimes prevent professional astronomers from observing these events, so it's not uncommon that data at some of the crucial times are provided by amateurs.
And there are lots of other areas that amateurs have contributed enormously to! Asteroid discovery, monitoring variable stars (https://www.aavso.org/), and even exoplanet discovery! In many ways the term "amateur" does a disservice to amateur astronomers because their setups can be quite sophisticated --- the only thing "amateur" about them is that they don't get paid for all the great work they do!
It always blew me away that the reason we had a project like that to do was because not every star had been looked at.
This was before Kepler data came out and it will not really apply once LSST starts kicking ass.
My childhood needs to quit getting to the front page like this.
Here's one way I can think of to fix it without just taking out the feature or trying to hide its results: redefine "visited" from being "visited ever, from anywhere" to "visited from here". For instance, if I visited a wikipedia article on my own, and it appeared on hacker news, it wouldn't appear as "visited". However, if I clicked on it from hacker news, next time I load hacker news the browser would mark it as visited. I think that would keep all the useful properties of visited, without leaking information.
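That proposed policy can be sketched as a toy model. This is hypothetical browser-internal bookkeeping, not any real browser API: the visited set is keyed on the referring origin as well as the target URL.

```python
# Hypothetical sketch of per-referrer :visited state, keyed by
# (referring origin, target URL) instead of target URL alone.
visited = set()

def record_click(referrer_origin, target_url):
    # Called when the user actually follows a link on referrer_origin.
    visited.add((referrer_origin, target_url))

def renders_as_visited(page_origin, target_url):
    # A link only renders as :visited on the origin it was clicked
    # from, so other sites learn nothing about browsing done elsewhere.
    return (page_origin, target_url) in visited
```

Under this scheme the only history a page can probe is the history it created itself, which is exactly the information it already had.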
l_and:visited and l_and_not are set to background color white, the others black.
512 link 'stacks' are created, each one in a td tag, each one for each possible setting of the 9 sites he checks.
Each of the classes gets mix-blend-mode: multiply turned on.
He then looks for a final image with color white -- that's the bitset of your visited sites, since the multiplies will all give white as the output.
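Why the white cell pins down the visited set: multiply blending multiplies normalized channel values, so a stack comes out white (1.0) only if every layer in it is white. A toy sketch of that arithmetic (illustrative Python, obviously not the CSS itself):

```python
def multiply_blend(layers):
    # mix-blend-mode: multiply on a normalized channel: the result is
    # the product of all stacked layers (1.0 = white, 0.0 = black), so
    # a single black layer forces the whole stack to black.
    out = 1.0
    for value in layers:
        out *= value
    return out
```

With one stack per possible visited/not-visited combination, exactly one stack has all-white layers, and its position encodes the full bitset.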
Like all the other cross domain policy stuff.
The same idea has been done before, except more cleverly with the color property and captchas:
What does this mean?
Side note: Micha is truly prolific, I get the feeling that his dream is to have a full system secure against common attackers, which consumers might actually want to use. He seems to attack every piece of system software in every way from design flaws to implementation bugs.
So, here's how to do it: sample 100-200 locations in a country. In each location, extract a tile of the map and count the circles in there. Then you need to scale the sum by the total surface of the country divided by the total surface of the sampled tiles.
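A minimal sketch of that estimator (hypothetical names; `count_circles_in_tile` stands in for whatever counting you do on each randomly sampled tile):

```python
def estimate_total(count_circles_in_tile, tile_area, country_area, n_tiles=150):
    # Sum the counts over the sampled tiles, then scale by the ratio of
    # the country's total area to the total area actually sampled.
    sampled_count = sum(count_circles_in_tile() for _ in range(n_tiles))
    return sampled_count * country_area / (n_tiles * tile_area)
```

With 100-200 tiles the variance of the estimate depends on how clumped the circles are; uniform random tile placement keeps it unbiased.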
I see a considerable amount of apathy towards serving in a jury. Why is it seen as such an inconvenience? I can understand if it causes financial problems, but I've seen jury dodging at all levels.
When it comes down to it, if you were accused of a crime wouldn't you want people who cared on the jury? If so you must serve on the jury and must do it dutifully.
Is there anything stopping a judge (other than time/resource availability) from doing this unilaterally?
That said, the judges in this article come across as juvenile and thoughtless, being concerned about how boring work is without trials. How about the poor defendant who takes the plea deal even though they feel they are innocent, just because the stakes are too high? We hear one mention of such a case, but the rest of the focus is on the poor judges and clerks who are bored or not paid enough ("my kids didn't go to camp"!).
There's an argument, but it's a really lousy one. If you're passing NULL as a string to printf, your code is broken. A crash which results in someone tracking down and fixing the bug is far better than quietly doing something which is 100% guaranteed to be wrong.
I strongly disagree with this one, although I'm not sure if my reasoning is broadly or only personally applicable. The last thing I want to do at the end of a long day of work is recap what I did. Even in the best-case scenario of a day full of victories, it's just exhausting to try to relive all of them with the added burden of explaining the decade-plus knowledge base you'd need to understand why Problem X was so hard to solve. I'm much happier with a base of other interests to talk about after work with a non-programmer instead.
Is this really true for everyone? I sacrificed my relationship because I prioritize my projects and I've never been happier. I can't imagine getting into another since it would mean compromises I'm not willing to make.
This had me laughing: "(No, changing your Vim colorscheme doesn't count as a different experience.)"
I wonder what that fifth of my brain would have been thinking about for the past 10 years, if not programming. Maybe it would have been dancing, or painting, or soccer. Instead of context switching into thinking like a computer, it'd be how to move my body around or how to meld colors together. I feel like that would lead to a much more fulfilling life.
I used to program for fun in middle school. It was probably halfway through high school when I stopped programming for fun. It was always that little nagging voice in the back of my head: Play it safe. Programming is an in-demand field! You're good at it! Look at all of that awesome shit you made.
At this point, the only "hobby" I have is programming. I don't even know what else I like anymore.
It is kind of sick to hear about programmers pulling 60+ hour weeks, digging the grave of the fellow next to them because he is not a "culture fit" or a good programmer by their standards.
The establishment knows this; that's why it squeezes every penny it can out of you, all while getting you to feel that you are the greatest programmer of all.
But to be in a meaningful, serious union and fight for your rights means you have to do a lot of soul searching and throw away the smugness and self-righteousness (about how great a developer you are) that is not compatible with the capitalist nonsense we face every day.
I am sure I will get flagged, because a lot of readers come from the "Valley", where money is god, and they don't really care.
Thanks to Google Books, the good ol' high school research paper standby of "write from bad sources (Wikipedia, these days, or a single decent source), then find good ones for the stuff you wrote until you hit the minimum reference count" also works for (at least most) undergraduate papers. You just need search and the free samples; the full book, and leaving your room, are not required. Bonus: it's also easier to get generated citations (for the real books, not as web resources) than to manually enter the data from physical books.
Someone recently found that Charles Wheatstone wrote about this phenomenon first, after noticing it in lathe-turned flat surfaces. Wheatstone looked at it a bit, then, taking inspiration from the obvious 3D images, went on to invent stereo drawings, stereo photography, and the stereoscope.
But, Wheatstone never got it. He never realized that we can make our own scratches, and therefore draw any 3D object. "Steampunk holography" remained lost for about 150 years.
"It is curious, that an effect like this, which must have been seen thousands of times, should never have attracted sufficient attention to have been made the subject of [scientific] observation. It was one of the earliest facts which drew my attention to the subject I am now treating." From Wheatstone, Philosophical Transactions of the Royal Society, 128 (June 1837), "On some remarkable, and hitherto unobserved, Phenomena of Binocular Vision", p. 371.
The same thing happened again in 1992. Two scientists at Polaroid corp. accidentally made some scratch-patterns producing flat images floating in 3D. They analyzed the scratch geometry. But they missed the secret trick, and never attempted drawing their "holograms" using individual scratches. doi: 10.1364/AO.31.006585
The hologram of a single point of light is a "zone plate": concentric rings. Move a sharp point in concentric circles on a surface and the scratches will form a zone plate which focuses light to a point. An abrading object is just an array of sharp points, so the scratches formed by moving the object in concentric circles will focus light to an array of points, corresponding to the shape of the object: an image of the object. Simple! (but not initially obvious.)
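The ring geometry can be made concrete. A zone plate focusing light of wavelength λ to a point at focal distance f has ring radii r_n = sqrt(n·λ·f); this is the standard zone-plate formula, not something taken from the sources cited above:

```python
import math

def zone_radii(wavelength, focal_length, n_zones):
    # Radii of the concentric rings of a Fresnel zone plate that
    # focuses light of the given wavelength to the given focal
    # length: r_n = sqrt(n * wavelength * focal_length).
    return [math.sqrt(n * wavelength * focal_length)
            for n in range(1, n_zones + 1)]
```

For visible light (about 0.5 µm) and a 30 cm focal length, the first ring sits at roughly 0.4 mm, which gives a feel for the scale of scratch spacing involved.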
That old code is on GH if anyone wants to fly with it ;) https://github.com/grinich/mikrokopter
Control itself uses a different, "traditional R/C" path (itself ripe for disruption), but there are plenty of possibilities from being hooked up to a Phantom 2's SSID. There are two Linux-based computers on that network: the "guts" and the camera controller. The root password for both is common knowledge, and you can brick an operating, in-flight Phantom 2 very easily with nothing but your laptop.
Hint, hint for a startup here, since I've been on three threads now where folks are looking for drone denial.
Let's hope at least Boston Globe has a backup of the spreadsheets and other original research documents. I wonder why a former journalist and current research scientist has no proper personal backup strategy.
Fun fact: XML schema declarations are themselves XML documents, and there exists an XSD describing allowable XSDs. https://www.w3.org/2001/XMLSchema.xsd
It seems that this format, while a clever idea, is going to have a lot of problems describing more complex JSON documents, which may have optional fields, mutually exclusive fields, etc.
For example, in the package.json example alone, there are several properties that can be either a string or an object, but it would be hard to expose that in this system.
But maybe this isn't supposed to be documentation, in which case I'm just confused about the purpose of this.
Edit: If more information about the available properties were available at each level (i.e. from the top-level, here are all the possible props; from the "contributors" prop, here are the different possibilities) and this could be used to document one's own project-specific JSON, this could be very cool. But the best would be if it had a way to integrate with a proper schema validator.
> Please check us out on a larger screen. :)
> Email yourself a reminder
WTF? You might not know this, but my phone is equipped with advanced pinch-and-zoom technology you may not have seen before.
Waiting for a good webpack.config.js one :)
What's significant about this dynamic is the effect it has on how value is distributed along the stack: the market cap of the protocol always grows faster than the combined value of the applications built on top, since the success of the application layer drives further speculation at the protocol layer.
That also goes the other way: failure at the application level can bring the token value down as speculators sell, and a lowering token value affects all applications.
For most goods and services, a centralized monopoly platform ("use our API, use our UI, feed off our database, speak with our sales and CSRs") is more straightforward to build and more advantageous for the stakeholders who will build and operate such a system. Only in certain niches is there a definite and ongoing win to building and participating in a decentralized protocol. This rock and hard place has made it hard for the field to break away from purely speculative activity.
While the tech is maturing and growing an impressive set of features, I don't see big breakthroughs coming without a major initiative from a big player.
The web flourished due to its lightweight, comparatively easy-to-implement protocols. Applications could be built cheaply and easily on top of it. VCs and many entrepreneurs profited handsomely. Much of the past 20 years in tech has been pushing this concept to literally every person on earth.
Compare to the 'fat protocol': The protocols are tough to write, and tough to implement. The value is captured at the protocol level, not the app level, so there's little incentive to create inventive apps.
Few apps will be created on such systems, so few successful consumer applications will be created with them. The winner here seems to be business-facing systems. For USV, it strikes me as peculiar to favor the 'fat protocol' thesis while backing other types of businesses (e.g. consumer).
I am no physicist, but these explanations seem really messed up. I don't think this guy got us any closer to understanding ball lightning.
During one of the photographs a huge white flash occurred. I turned around and couldn't see anything. It made no sense because I was up on a mountain, and the silent lightning appeared to come from above and behind me.
There wasn't even a cloud in the sky in any direction and it wasn't my camera flash because it was during a long exposure and I was looking at the camera. The photo I took came out over exposed even though it was the same length of exposure I had been using all night.
Don't think I will ever know what it was, but I've always suspected that it's probably something similar in nature to what the article talks about.
I saw silvery-white ball lightning from my 7th story window a few years ago. It swooped, turned sharply and touched down maybe 200 meters away. It was a very bright, constant light against a cloudless blue sky in the middle of the day, moving at a constant speed.
The best explanation I have for it is decay products of cosmic rays, resulting in plasma soliton. My estimate of the wattage and energy of its light output would mean it wasn't from a source in our solar system.
I would think that if the source wasn't our sun, then events like these would be very easily seen at night, and that my northern latitude's long dark winters would de-bias opportunities to observe. Meanwhile, if the source was our sun, I would assume observations would correlate with sunspot activity.
(I am not sure that this would be the same phenomenon, but it's definitely microwave plasma).
The most memorable incident was when it dropped out of the sky directly over a little league baseball game I was playing in. Several dozen people saw that one. It hovered, sizzled, zipped off, and disappeared. They called the game on account of weather.
PS: The video in the article is unconvincing and probably a firefly.
It seems not well known and somewhat controversial, but it offers rates more generous than anywhere else: for example, 4.8% from Expedia and 5.6% from Orbitz. Unlike every other cash-back program, they do not cap the amount for flights.
JetCash is effectively real cash because many items are available cheaper than even from Walmart, Amazon, or Costco. Presumably Jet was taking a loss on those transactions in the short term. It really has been my favorite cash back program ever.
And to be clear, this isn't a cynical musing. I'm genuinely curious to see how this worked out for them.
Amazon is built through and through as a tech company. The people running it all have tech backgrounds so they're all able to get behind the Bezos vision of automation and scale trumping all.
To give you an idea of how savvy the management of Amazon are, I heard rumours that Diego Piacentini (Bezos' lieutenant managing the retail business) was known to write his own SQL queries. The head of my own team, a more conventional "retail guy", barely knew how to use Excel and had chiefly gotten to where he was by politics and tenure, despite being younger than Diego. Whether it is true or not, it gives an idea of the mindset and skill set valued at the top of Amazon.
Now I'm almost certainly biased, given that I've worked for Amazon and not for Walmart, but I'd be willing to bet that Walmart has more of the latter "old school" types than CS folks looking to solve new problems. And to me, that says the odds are tipped against this integration being successful, as catching up with a tech company would require an overhaul of Walmart's entire infrastructure and culture. Not to mention that Walmart is at the mercy of its shareholders, whilst Bezos is still majority shareholder.
I must have been very wrong. I don't see how Jet justifies $3B from Walmart.
I will credit them on one point: they do seem to have great employee satisfaction, the result of investing lots of time and energy in creating a healthy environment.
Our very own Marissa Mayer. We also have Kevin Systrom on the WM board.
I wonder if Walmart's more physical presence can help with this?
Damn shame, I was hoping to maybe apply there at some point.
Am I the only person who would rather pay more than have to order anything from walmart.com?
Aside from serious ethical issues, my issue with Walmart is that manufacturers produce lower-quality brand-name items just for Walmart. It's hard to know what you are actually purchasing.
- better shortcuts in the web interface
- the mobile web interface is actually good
- can import email by IMAP
- POP links actually work; Gmail's POP links are broken
- IMAP is better implemented; Gmail limits IMAP to 15 max connections and each folder ends up being a connection
- CardDAV works and has good picture resolution; when I was on Google Apps they were limited to 80px
- FastMail's Sieve filters are very flexible
- on folders vs tags, I like folders more, because then I can import my huge work email as a backup without polluting my searches and my archive
- Google Apps email aliases are limited to 30 per user, which is pretty dumb and insufficient if you have a couple of domains
- FastMail does sub-domain email aliasing, which is awesome, as now each user account I have has its own email; Gmail only does "plus" aliasing, but that's obvious and problematic
On the matter of privacy, Google is simply too big and has access to too much info. They have your searches, often representing your secret desires, your video/music preferences, your favorite locations and habits, your travel itinerary, your voice, your chats, your G+ likes, your email, your purchases, etc.
And don't get me wrong, personally I've never seen many big companies as competent and as non-evil as Google. I also worked with their AdX and I can tell you that from the advertiser's perspective, Google discloses much less information than others in the business. But they don't have to be evil right now, they simply have to store that info and analyze it later, sell it, etc. And consider that the info in question is enough to determine with accuracy if somebody is pregnant, male or female, black or gay, as in things that in the right context can get one injured or killed.
In other words you can use Google's stuff, but reducing their area of knowledge and not placing all your eggs in the same basket is always wise.
And if you love your Gmail interface and all the goodies that come with it, that's fine. Get a 1-user Google Apps account ($5/month) and start using your own domain with it. That way you have the freedom to switch to another provider at any time once you are ready.
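For reference, pointing your own domain at Google Apps mail is mostly a matter of MX records. A typical zone fragment looked like this (the hostnames are Google's published MX servers of that era; `example.com` and the TTL are illustrative):

```
; MX records routing example.com mail to Google Apps
example.com.  3600  IN  MX  1   ASPMX.L.GOOGLE.COM.
example.com.  3600  IN  MX  5   ALT1.ASPMX.L.GOOGLE.COM.
example.com.  3600  IN  MX  5   ALT2.ASPMX.L.GOOGLE.COM.
example.com.  3600  IN  MX  10  ALT3.ASPMX.L.GOOGLE.COM.
example.com.  3600  IN  MX  10  ALT4.ASPMX.L.GOOGLE.COM.
```

Since the domain is yours, switching providers later is just a matter of changing these records.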
It's a small thing, takes me all of 10 seconds, but I do notice it, every morning.
My domain got blacklisted once. I contacted the service concerned (i.e. the people running the blacklist) and they said my web domain had appeared in the footer of a spam email.
"So, do you have any evidence I put it there, or paid someone to put it there?"
"So you'll blacklist random domains a spammer puts in their email? Because that's what happened here."
I was surprised (and still am) that this kind of service could be so naive. My domain was literally just a bare http://domain.com/ in the footer, no link or advertising associated with it at all. Domain blacklist successfully polluted, as far as the spammer was concerned.
I would like to add one minus though. Any good old smiley like ":)" in emails gets replaced by a yellow smiley face icon. I hate to see yellow smiley faces where someone wrote colon-end-parenthesis. It's all done client-side though, so no one else has to see it. I have been in contact with FM tech support and they seem to be uninterested in adding a checkbox to turn this nuisance off. Otherwise an excellent, excellent service.
15 GB is free over at Google. Does that mean my data is really worth $40 a year to them? I do realize this is oversimplifying things...
One option would be to "self host" at Digital Ocean. For the same $120 I would get 30 GB storage and I could use the VPS for some other things. But even DO themselves try to dissuade you from doing that (on reasonable grounds I believe).
That said, I'm not sure why more people don't consider upgrading to Google Apps from free GMail. $50 a year gets you an SLA, support, and no ads. It's been extremely reliable for me and I've not had any downtime (that I've noticed) in 5+ years. No performance problems either, of the sort I hear folks complain about with free GMail.
Migrating away from GMail for privacy reasons and he still ends up with Google for functionality...
They lack two big features (caveats?) as of now.
(1) searching for a text within the body of the email is not available (They can't read my email kinda thing.) and
(2) Inline images don't work - pretty bad flaw.
I do like :
(1) Simple and Fast UI for web app, and iOS App.
(2) Knowing that I'm supporting folks that care about privacy and freedom. They do open-source some of their stuff and are now the maintainers of OpenPGP.
Haven't looked back since and it just keeps getting better.
These guys are the core contributors to so many fantastic open source products, they're transparent, respect your privacy and security above all else and it's resulted in an excellent all-round email service.
Australia is spying on your email on FastMail the same way the NSA is reading your Gmail.
Totally agreed. I'm a happy FastMail user, glad to pay for such a great service.
Pros:
- 100% green energy
- 100% Free Software
- Servers run on fully open POWER8 architecture!
- Servers for your contacts (CardDAV), calendar (CalDAV), and notes (IMAP)
- Swiss privacy laws
- They run what seem like very fancy business-class LUG events in Europe. Of no utility to me whatsoever, but I'm glad to be indirectly funding this sort of thing.

Cons:
- No 2FA :(
- Not the cheapest (but I'm happy to pay a little extra for the above)
- Slow webmail (moved back to native clients)
I outlined the factors in this decision in https://gist.github.com/tomfitzhenry/d73fef19752cbf6ccdda3eb... .
I'm actually using mu4e for exactly this reason: it's so much faster than any web client could ever be. And I'm saying this as a professional web dev ^^ And yeah, I know Gmail; I was an early adopter and have seen two companies migrate to it in the last five years.
Of course, running mail within Emacs has its additional awesome benefits, but that's a different kind of argument I'll leave out for now. I'm honestly curious why people think/believe/know that GMail is faster than a well engineered locally indexed app. It just doesn't seem to be the case for me, but I hear this time and time again.
If you're a big Google Drive user, you'll most likely miss Gmail's built-in integration with Drive, but FastMail has a simple file storage feature, where you can save attachments to your allocated space and attach files from that storage.
Another advantage of FastMail is that email is their primary business, or so it seems.
Is it possible to have my email usage through FastMail but keep my email address to log in to google so I can still access all my docs?
Or do I need to create a gmail address and move my docs over, then move my email over?
At some point I asked them whether they could "emulate" Gmail's UI, so that these apps and Chrome extensions could run on Fastmail. But understandably this is quite a big task. If they could pull it off, it would be quite phenomenal though..
Am I alone in finding web-based email too slow for day to day use? The responsiveness of a local MUA w/ or w/o a fast index (notmuch, etc.), once you're used to it, is hard to live without, at least for me. I find it messes with my workflow if I click on an email or folder and have to wait for the browser to return and render the XHR result. Or did Gmail just become slower and slower with time? I haven't tried FastMail yet.
That said, once I figured it out the service seems solid.
With the amount of upvotes for this article, is it safe to assume people like (and trust!) FastMail? I didn't used to care about privacy, but I have been much more interested in it lately, so I would like to switch.
Why is this always a bad thing? Is it just an innate feeling against having your information "used"? Personalization is an ever more important and much wanted feature in everything else in life so why should ads just be generic and irrelevant?
Did you delete the Gmail account?
Silly overreaction. You're going to use a service that is not as good and waste a bunch of time on importing/exporting, for reasons that would have made no difference to your life.
So you're actively choosing to downgrade your life to spite someone else. Smart move.
I don't think you can really move your data out of Google, right? They will keep it even if it looks deleted to you.
Google (Alphabet now, right?) has changed their TOS so many times; can someone actually educate me on how long they keep my deleted emails, and whether they ever truly delete them, or whether there is some 160TB compressed tape archive in their basement so that, if they truly want to, they can open it and read my emails from today in the year 2056?
Perhaps their thinking is to get the asset under lease while the price is still at its current rate and then spend on expansion later if it's necessary (at theoretically lower prices, based on the supply trend?).
The only reason for me not to use Twitter right now is all of the ridiculous, desperate nonsense they are doing to maintain their outsized workforce. If they would make timelines linear again, and focus on improving performance, their audience could be maintained and their ad revenues could be retained; they'd also have room for improving targeting.
Somebody should set up OpenTweet as a budget competitor.
It'll ship late and probably cost 2x that, at which point you should just get a Chromebook. Or, ya know, use the laptop you already have.
If I recall correctly Ubuntu wanted to try that, as did Microsoft. Too early to market perhaps.
Crazy that Ubuntu needed $32m, while this kickstarter has only asked $50k.
Everspace on the other hand looks quite promising: https://www.youtube.com/watch?v=7HvpLe-2ijk
Even so, bravo to the team for the extra effort to make sure the experience is as good as it can be.
(Oh, and atmospheric refraction? Surely he means improved Mie/Rayleigh atmospheric light scattering.)
When I started years ago there was LILO. It had quirks and was a pain to configure. Then grub came along and it had some great new features -- edit the easy-to-understand config file, and changes would automatically take effect. Easy background images. Easy menu building in the config file. Grub understood enough of the filesystem to avoid the LILO cruft.
Now there seems to be about 5 layers of indirection, autodetection, probing, and shell scripting hacks to generate scripts that will eventually spit out a mess of obfuscated grub config files, and god forbid you touch any of this mess because any changes you make will be overwritten (a) next boot, (b) next time the scripts decide to update the config, (c) next system update, or (d) just whenever. All of this dynamic stuff to configure something that, for many systems, needs a config change about zero times per year.
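For contrast, a complete hand-written GRUB 2 config for a single-distro machine can be this small (kernel filenames, partition, and device are illustrative, not from any particular system):

```
# /boot/grub/grub.cfg -- static, hand-maintained, no generators
set default=0
set timeout=5

menuentry 'Linux' {
    set root=(hd0,1)
    linux /boot/vmlinuz root=/dev/sda1 ro
    initrd /boot/initrd.img
}
```

Nothing here needs regenerating until you actually change kernels, which is the point of the complaint above.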
> Did that really just take as long as I thought? This machine has a gigabit connection to the Internet. I must be imagining something.
> Okay, I wasn't imagining something. It must have downloaded a TON of source!
root@tessier:/usr/src/ubuntu# du -sh .
19M     .
Having a gigabit internet connection means you can download any content from any server in the world at 1Gb/s, right? /s
The tone of the post makes it pretty hard to take that rant seriously.
The reason that addiction treatment programs find it so hard to treat 'addicts' is that they're dealing with the small subset of people who, for one reason or another, can't quit on their own. It's sort of the opposite of survivorship bias.
> "How cool will it be if we can collect data on 9- and 10-year-olds that will help predict how all young people will function in later life?" asks Garavan. "This is the sort of information that will truly help people parent, and legislate, and educate, and live healthy lives."
I'm not as enthusiastic about that future as Garavan is, and especially not the idea that it could shape legislation in any kind of significant way. Doubly so if it's going to be done via statistical models and correlation, rather than a deeper understanding of the mechanisms involved.
I'm not sure I will ever understand addiction. But I know that when I accepted personal responsibility for the things that I did to myself and others in order to get drunk/high, life became at least a little easier. It's not perfect, but to use AA terminology it's not "unmanageable" anymore, at least not as much as it once was. And I am grateful to be where I am today. I guess, for me, my compulsion to drink and drug was mainly caused by my surroundings and upbringing.
Although I don't go to AA or NA anymore, if it is working for some, I hope they see it through. Whatever is getting you through to the next day as an alcoholic/addict, my advice is to keep doing it. Whether it is spiritual cleansing or acceptance or doses of naltrexone/buprenorphine.
"...ultimately I should be able to replicate your findings and you should be able to replicate my findings; otherwise it's not science, it's bullshit."
I absolutely, 100% support searching for alternative methods to help people who are drowning and suffering in addiction, but if a method doesn't work, there's no point in following it.
Underlying emotional pain is what causes people to literally self-medicate.
This is not to say that substance abuse doesn't create actual physiological necessities to then continually consume the substance. It does, e.g. the body stops producing its own opioids when it starts to get them from the outside.
When someone begins using, then abusing, they are trying to relieve an underlying issue. The action is a symptom of a deeper problem expressing itself (the actual dis-ease). These issues are usually severely repressed and suppressed, so much so that one cannot articulate them-- only medicate them.
Digging out and resolving the root emotional issues goes a long way to curing someone. Afterwards, physiological damage still needs to be addressed.
If addiction is a disease why wait for an addict to hit rock bottom?
Would you wait for a diabetic/heart disease patient to hit rock bottom before treatment?
Note: below is a link from reddit with their opinions on the video (maybe or maybe not they are actually a neuroscientist ¯\_(ツ)_/¯)
Not found!? Well, I'm just the motile conglomeration of vermin for the job! "Rat Park", a utopia for rats, is (IMO) a critical component of the pushback against the disease model of addiction. An illustrated summary can be found in this lovely comic, while more interested parties can peruse the works of Dr Bruce Alexander, in the form of a blog retrospective, or a slightly more technical presentation to the Canadian Senate, or his excellent book, The Globalization of Addiction: A Study in Poverty of the Spirit.
A layman-readable synthesis goes something like this: folk wisdom holds that some drugs are insanely addictive as an inherent property of the drug. But this has generally been supported only by introspective interviews of inveterate drug abusers and by highly technical experimentation on lab animals (mostly social animals, like primates and rats). Introspection, in most domains of psychological study, is rarely used and is treated as hearsay (it's not a contentious claim in psychology that people aren't good at examining their own mental states in a scientifically useful way). The methods of the experiments on social animals like rats and primates often dictate cramped, isolated, and impoverished conditions. The "Rat Park" study took normal lab rats and placed them in an enormous enclosure filled with nesting boxes, large open areas, wheels, and toys. The control rats, by contrast, were treated in every way identically to a standard addiction study: tiny cage, no comforts, no stimulation but for a human changing their litter tray. Various experiments ensued, but even with different experimental treatments, the researchers couldn't induce addiction in the rats of Rat Park, while even fiercely bitter, heavily diluted morphine-water was the choice of the isolated, imprisoned rats.
The orthodoxy of invincible, permanent addiction was vanquished in these animals (and in the minds of the researchers) by providing a more stimulating, spacious environment filled with fellow rats with whom to interact. This might say something important about the nature of addiction in other social animals, like humans.
It IS true that you can see pretty drastic rewiring in the ventral tegmental area (which covers, generally, reward systems, drug addiction, strong emotions) with things like nicotine and cocaine, so the picture is probably more complicated than 'it's purely a social problem', but the ascendancy of the dogma of the disease model of addiction functionally prevents further study. Note that the Rat Park paper was very difficult to publish and received no attention for years.
Addictions are not simple like Pellagra or Scurvy, but when you figure out what a specific person actually needs, they can rapidly recover. I've done it twice - my alcoholic friend is doing quite well; my poly-addict (opiates/cocaine/alcohol) would be doing quite well if she hadn't been captured and court-ordered to endure palliative Psychiatric treatment.
The alcoholic became an alcoholic when she discovered that vodka helped her anxiety more than Xanax. Benzodiazepines lose effectiveness after about 4 weeks; when a person who is addicted to this class of drugs tries to quit, their anxiety is worse than it was before. She was peri-menopausal at the time, which was probably a huge factor...
As for the poly-addict.... When I met her, I said to myself, "this woman is 'high as a kite'..." As the months went by, she gradually invited me into her world, and I learned that she really was self-medicating with the street pharmacy.
She latched on to me like a life preserver. After almost six months of my influence, and a little non-quantifiable hocus-pocus, she called to share three insights, spread over 3 days:
- "I wish I wasn't a drug addict..."
- "I should only use substances which are legal!!! *Alcohol is legal...*"
- "I hate methadone, I hate everything about it."
So after six months, she was doing rather well. But alcohol is hard to kick on your own, and she was taken to the hospital as "psychotic"... The psychiatrists got hold of her; it's been a disaster. They pretend that the symptom of withdrawal from substances ("psychosis") justifies the use of so-called anti-psychotics in perpetuity. Robert Whitaker has looked at the evidence and concluded that there is no benefit to the routine use of these drugs...
The mental health field needs a clean-sheet redesign. Nothing else will help.
(edits: formatting, clarity)
In fact it's very common in this kind of discussion to see an argument like "If you believe <positive statement> then you must support <morally abhorrent policy>". But mostly the morally abhorrent policy doesn't actually follow from the positive statement, and if the person making this argument were forced to accept the positive statement, they would not in fact support the policy. In this debate:
<positive statement> = addiction is not a disease
<morally abhorrent policy> = people with addiction don't deserve any sympathy or help