It's tempting to throw away the old thing and write a brand new shiny thing with a new API and a new data model and generally NEW ALL THE THINGS!, but that is a high-risk approach that usually lacks a correspondingly high payoff. The closer you can get to a drop-in replacement, the happier you will be. You can then separate the risk of deployment from the new shiny features/bug fixes you want to deploy, and since risks tend to multiply rather than add, anything you can do to cut a risk in half is almost always a big win even if the "total risk" is in some sense the same.
Took me a lot of years to learn this. (Currently paying for the fact that I just sorta failed to do a correct drop-in replacement because I was drop-in replacing a system with no test coverage, official semantics, or even necessarily agreement by all consumers what it was and how it works, let alone how it should work.)
The hardest risk to mitigate is that users just won't like your new thing. But taking bugs and performance bottlenecks out of the picture ahead of time certainly ups your chances.
I could see this being OK in most cases where speed is not a concern, but I wonder what we can do if we do care about speed?
It's a good thing nobody contributes to my GitHub repos, since no one has had the chance to run into the issue...
On average, I get much more satisfaction from removing code than I do from adding new code. Admittedly, on occasion I'm very satisfied with new code, but on average, it's the removing that wins my heart.
I'm curious though if there are any strategies folks use for experiments that do have side effects like updating a database or modifying files on disk.
Any chance GitHub is at any time going to show the specific merge conflicts for a PR that cannot be merged?
Let's do this right now.
--> What term can we use instead of "technical debt" that is financially correct and also captures the emotional and analogy part of it?
(this is at -2, 8 minutes after I submitted it - boy some people really hate my question! can these people kindly think my objection through :-D. thanks.)
Right now, there is a massive cookie consent form blocking my view of the actual article.
"Are you suggesting that the oscillating ice ages are caused by a slowing and speeding up at the earth's core?"
Still interesting though.
The original NES console was only designed to output images that were 256 wide by 240 high; meaning that the final image that needed to be displayed to the screen was 180kb in size.
The NES definitely didn't have 24-bit colour, so the final image data was at most 60kb, assuming 256 colours, or 30kb assuming 16 colours and a palette.
I don't know for sure what colour settings the NES had, I doubt it had a freely selectable 256 colours for each pixel. Probably a limited palette, maybe per-sprite, maybe for the whole screen.
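For reference, the byte arithmetic behind the figures in this thread (this is just the math for a 256×240 frame at different colour depths; the NES's real video hardware works with tiles and palettes, not a raw framebuffer):

```python
# Frame-size arithmetic for a 256x240 image at various colour depths.
# (Only reproduces the sizes discussed above, not how the NES renders.)
WIDTH, HEIGHT = 256, 240
pixels = WIDTH * HEIGHT          # 61,440 pixels

print(pixels * 3 // 1024, "KB")  # 24-bit colour, 3 bytes/pixel -> 180 KB
print(pixels // 1024, "KB")      # 8-bit, 256 colours -> 60 KB
print(pixels // 2 // 1024, "KB") # 4-bit, 16-colour palette -> 30 KB
```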
The state of the image (JPEG artifacts) was a dead giveaway that the comparison is worthless.
There are offline OpenStreetMap clients, but there's no way to update said map offline, or to create a "mini-OSM" that can later sync with the main one.
For instance, if you're doing a survey in the Amazon with a local community, you would need to do the survey, go back somewhere with internet, sync the data with OSM, download the new file, and go back to the local community.
The innovation those guys are making is to create a mini-OSM, so the village could have its own mini-OSM, and later that mini-OSM could be synced to the main one.
They are not the only ones trying to do that, an NGO called Digital Democracy is also trying (https://digital-democracy.org)
The trend goes in the complete opposite direction: the devices get faster, but we only use that to pull the data from our servers faster. We push all our data into the cloud, even though our devices share a private network most of the time.
Over the long term, widespread access to offline maps feels like a critical plan B. I also suspect that we're just at the beginning of a map industry not in the mature commodity phase.
Obviously an editor like the one SpatialDev is developing would need more advanced features, which is why they're targeting Android.
There are already tons of applications for offline OSM usage. There are even (sadly) many different vector formats and files available for download. It would be nice to focus on improving those instead of developing yet another competing standard. I want to be able to use multiple apps with the same data, not have to provide each app with its own format (looking at you, OSMAnd, Oruxmaps, maps.me...).
2. The 'crank' here is not part of CSS. Computer languages that are declared 'Turing complete' need to be able to crank themselves. You need to be able to tell them to go, and wait. I accept the Magic: The Gathering Turing machine (at least as long as you remove the word 'may'), because it's part of the MtG rule set that you continue performing all the state transitions until you reach a halt.
3. Allowing this completely external pump means that anything that can add and multiply three numbers and then exit would be counted as Turing complete, because you can then instantiate an infinite number of these and pump data through. The Turing complete nature of that construction lies mostly in that pump. It is not at all just a crank that says 'go'.
And 3 is really the important part here. None of the scary implications of 'Turing complete' come into play, because you can't take the result of one arithmetic statement and feed it into more. All of that playing around is roughly O(n) in terms of page size. Not O(unlimited) as 'Turing complete' might imply.
1) it's not surprising: a lot of very simple systems can compute. It seems to naturally happen as you add flexibility to your rules. See for instance Wang Tiles or some cellular automata.
2) it's not a good thing: it means that the behavior of CSS3 is undecidable in general, which makes it much harder to build tools that can meaningfully analyze it.
"Assume that the amount of HTML currently loaded is finite but sufficient for all of the state to be properly rendered."
So consider: <html> <div class="myContainer"> <div>1</div> <div>2</div> <div>3</div> . . .
I've never met an HTML streaming solution that won't just stream this sequentially, which means no </div> for myContainer will ever be emitted (and therefore the HTML will never be well-formed, and the CSS will never have sufficient information to lay out).
If the HTML were streamed such that <html><div class="myContainer"></div></html> were received, and then the interior of myContainer were streamed, that'd be a different story, but that doesn't exist. So I don't think this blog post's argument that CSS3 is Turing complete works for any real implementation.
I'm actually a little concerned about the concept of "human trafficking" since the stats being calculated tend to omit realities.
One big reality people ignore? People who pay to be taken across the border from Mexico to the US are counted as victims of human trafficking.
>99% of the immigrants who arrived illegally? Counted as human trafficking victims, paper after paper, when you read into them. They even count cases where people came alone but had monetary help from a friend or family member.
So it's ironic. My thesis points out the contradiction: people who push for the rights of illegal immigrants while also proposing to fight human trafficking are really advocating opposing positions.
By extension (not mentioned in the paper), I've wanted to show for the longest time how people abuse "human trafficking" out of sheer political opportunism.
I'll gladly post the paper on my blog when it's checked by our department.
Imagine that shouting has no actual effect on performance, but it is traditional to shout at underlings when they do something particularly poorly. When your trainees screw up, you berate them - and afterwards they actually do tend to do better. Unfortunately, this is because the screwup is more often than not a random variation, and the improvement is due to regression to the mean, not the treatment. Conversely, praising them when they do well (again, assuming no underlying effect) actually seems to worsen their performance.
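The effect is easy to reproduce in a toy simulation, under the stated assumption that each observed performance is fixed skill plus independent noise, with no effect from the shouting or praise:

```python
import random

random.seed(42)  # deterministic run

def performance(skill):
    # One observed attempt: underlying skill plus day-to-day noise.
    return skill + random.gauss(0, 1)

skills = [random.gauss(0, 1) for _ in range(10_000)]
first = [performance(s) for s in skills]

# Select the trainees who did particularly poorly (bottom decile)...
cutoff = sorted(first)[len(first) // 10]
bad_idx = [i for i, p in enumerate(first) if p <= cutoff]

# ...and watch their second attempt, with no intervention at all.
second = [performance(skills[i]) for i in bad_idx]

avg_first = sum(first[i] for i in bad_idx) / len(bad_idx)
avg_second = sum(second) / len(bad_idx)
print(f"worst decile, attempt 1: {avg_first:.2f}")
print(f"same group,   attempt 2: {avg_second:.2f}")
```

The "berated" group improves on its second attempt purely because an unusually bad score is partly bad luck; symmetrically, a top decile selected the same way would look worse on its second attempt.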
90%+ of published outcomes can be invalidated by simply looking at the published data. If you want some chuckles, read blog posts by Al Lewis ripping on research published by companies touting their own performance. He's acerbic and condescending but also, mostly, correct.
Then the industry will mature, and this will be used with greater subtlety.
A blend of handmade compositing and simple warping. Technology is catching up too fast.
In any place that needs air conditioning, solar power is very effective. Peak load and peak solar panel output line up nicely, and little storage is needed. Keep it simple.
The German Wikipedia is published and maintained by a US entity, there's a local German Wikimedia chapter but it's not the publisher of the German Wikipedia. The article says that they sued the US entity (the Wikimedia Foundation Inc., not Wikimedia Deutschland), but doesn't explain this issue.
The Multi-Configuration Project abstraction (i.e. build matrices) is clunky and the plugin ecosystem doesn't respect it well (e.g. the Gerrit plugin is extremely popular but very brittle here). So you wind up with O(n) projects anyway and still need to copy and paste configuration among them.
Also Jenkins configuration itself is pretty nuts - settings splattered all over the web UI, backed by XML - compared to the simplicity of modern tools like Travis (which uses YAML).
And Jenkins' UI I would definitely categorize as typically-poor open source UI, having evolved and grown more complex over many years with no strong guiding vision.
So I prefer to have clear segregation. Jenkins as a build tool only. Rundeck for deployment. If I had a big need for scheduling, I'd want a dedicated system for doing that too.
1) Simple UI (for simple use cases)
2) Easy setup in single-node or multi-node scenarios
3) Automation capabilities
Unfortunately there are some gaps that make it just enough of a pain to really take up in a Production 24x7 env.
1) In a distributed setup, there are very minimal node management capabilities unless you manually integrate with, say, ZooKeeper or something.
2) the plugins for backing up and restoring configurations are "lacking" to put it politely
3) Very hard to change the Master machine in a master slave set-up
So while Jenkins is like the Swiss army knife of CI, be careful that you don't take it to a (multi-node production) gunfight, to stretch the analogy.
I'm almost to the point of using it just to manage generic "do X sort of stuff" tasks across many nodes, with jobs that rely on an external system to run parameterized builds and then store the results back into that external system.
Reminds me of this post by Ted Dziuba where he uses Makefiles for data processing.
I like reading about novel uses of tools other than their original intent.
In particular I like the flexibility around snapshot versus artifact dependencies, the APIs are decent (and you can do a lot of troublingly clever things if you invoke the API from within a build), and the metarunner concept seems strictly more powerful than the Jenkins equivalents, albeit with a somewhat steeper learning curve.
Jenkins is very powerful, but I would not trust it (or any of the myriad plugins we have installed) to not have security holes.
Many designers think "there's no need to allow zooming", but this is often coming from people who are blessed with youth and/or great eyesight. A lot of people (especially as we get older) need to be able to zoom in to read things, or we just want to zoom in on images to be able to see more detail (especially graphics that have text in them).
Fortunately, you don't need to set "user-scalable=no" in order to reap the benefits of the "no tap delay" (thank you to the webkit team for hearing people's feedback about this and changing course from their original plan which was to only disable tap delay when page wasn't scalable).
You model the price per unit as the sum of different curves.
Complexity increases not only in software, but also if you design a thermal engine, or a plane, or a car.
Working on making something as simple as fiberglass, we had something like 100 components, like tensioactives (surfactants). Most of them we had no idea what they were for, as they had been added decades ago by someone who knew.
Nobody wanted to remove a given component and be responsible for the fiber breaking and stopping the line, incurring tens of thousands of dollars in penalties, so new complexity was added, but never removed.
In my experience, software is the thing in which YOU CAN get the greatest economies of scale possible, because you do not depend on the physics of the world. But you need to control complexity as you develop.
In the real world, you create a box because it is the only way of doing something, and the box automatically encloses everything that is inside. You can't see inside, nor want to. It is a black box that abstracts your problems away.
In software you have to create the boxes. Most people don't do it, with nefarious consequences.
(Per month:)

    Users   Total   Per user
    -----   -----   --------
        1     $10        $10
        5     $10         $2
       10     $10         $1
       15     $75         $5
       25    $150         $6
       50    $300         $6
      100    $450         $5
      500    $750         $2
     2000   $1500         $1
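For what it's worth, a minimal sketch of that tier table as code (the tier boundaries and totals are taken straight from the table; `monthly_total` is a hypothetical helper name):

```python
# Tiered pricing from the table above: (max users, total $/month).
TIERS = [
    (1, 10), (5, 10), (10, 10), (15, 75), (25, 150),
    (50, 300), (100, 450), (500, 750), (2000, 1500),
]

def monthly_total(users):
    """Total monthly price for the smallest tier covering `users`."""
    for max_users, total in TIERS:
        if users <= max_users:
            return total
    raise ValueError("no tier covers that many users")

for users in (1, 10, 15, 100, 2000):
    total = monthly_total(users)
    print(f"{users:>5} users: ${total:>5}/mo  (${total / users:.2f}/user)")
```

The per-user price first falls, jumps when a new tier kicks in at 15 users, then falls again within each tier.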
k/q is small but a very useful interpreter.
There are so many examples, but it appears that to "the market" the most valued software development is large scale.
The sentiment is create and contribute to large projects or go home. Stupid, but true.
"Do one thing well" is more than just a UNIX philosophy. It is an essential truth. Most programs are lucky if they can do one thing "well". How many so-called "engineers" are afraid to write small, trivial programs lest they be laughed at?
Large programs often become liabilities. Can we say the same for small programs? If it happens, write a new one.
Maybe a user with an unmet need would rather have a program that does the one thing they want as opposed to one program that can allegedly do everything... whereby they are granted their wish through the addition of "features". More internal complexity. And the majority of users use only a fraction of the program's feature set. Waste.
To extend the metaphor to milk, what if the milk industry had to invent the glass industry in order to make the bottles it's delivered in? Consumers would have cows, not refrigerators.
The diseconomies-of-scale software is the programs where normal glass simply can't be used to hold the milk. A whole new custom type of glass has to be developed. And usually for a type of milk that only about 1,000 people even drink.
Complexity is something completely different and is well known in all products. I can design a calculator that adds numbers very easily. A calculator that does fractions is much harder to design and costs more. A car with a more complicated engine is much harder to build than a simple engine. This has nothing to do with the actual economies of scale of the calculator or car or you could say that cars have dis-economies of scale too - and obviously they don't. They're the poster child for economies of scale.
Building a truck that is 10km long is worse than building 100 trucks that are each 100m long, but this has nothing to do with 'diseconomies of scale' inherent in trucks.
Why, I think I've heard that before...
"Do One Thing and Do It Well" from https://en.wikipedia.org/wiki/Unix_philosophy
However, managing systems of small software also incurs complexity: the smaller the software components, the harder you have to work to make them play together.
It's often not clear a priori whether it's worth paying a lot more up front for a monolithic solution or trying to glue together many simple tools.
The Netherlands? We only speak Dutch here. :-)
I guess the author means Belgium, where they speak (at least) two languages: Flemish (Vlaams) and French.
Software exhibits this same economy of scale in production. Take Google's machine learning platform. It allows multiple functional teams to churn out roughly-equivalently-complex machine-learning-powered widgets in less and less time. Contrast that with a startup building a single machine-learning-powered widget: the marginal cost to Google is significantly lower.
Pretty much any strategy to improve making software at scale, whether code organization or organizational design, is finding ways to limit the complexity of the graph to a constant multiplier of the number of nodes, and keeping that constant small, rather than allowing things to grow quadratically.
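The quadratic-vs-linear difference is just the pairwise-channel count; a quick sketch (the per-node limit `k` here is an assumed illustration, not from the comment above):

```python
def fully_connected(n):
    # Every node can talk to every other node: n(n-1)/2 channels.
    return n * (n - 1) // 2

def modular(n, k=3):
    # Each node limited to k interfaces: channels grow linearly in n.
    return n * k

for n in (10, 100, 1000):
    print(f"n={n:>4}: full graph {fully_connected(n):>6}, modular {modular(n):>5}")
```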
One of the main effects of protectionist and interventionist policies has been related to them. A domestic firm starts to rot, unemployment prospects are rising and a sense of national preservation starts to set in. Thus, in the short term, tariffs are levied, subsidies are made and some macro notion of "stability" or "optimality" is reached. The long term costs are the artificial delaying of the onsets of diseconomies of scale with state and business expansion leading to symbiotic interests. Then people complain about Big Business fucking them over.
(The fact that the author quotes Keynes makes this all the more ironic. Keynes-the-man wasn't objectionable, but the neoclassical synthesis/"pop Keynesianism" of his disciples Paul Samuelson and John Hicks did influence government policy in a negative way, as noted in James M. Buchanan's Democracy in Deficit.)
Supply and Demand says the opposite. The supply curve slopes upward, meaning that a higher per-unit price is required when the aggregate supply is higher.
Economies of scale apply in some situations, but people generally place way too much weight on them.
Poor performance on military projects is often an issue of huge development costs spread out over a tiny number of units.
Apple spends as much to develop an iPhone as it costs to develop a new weapon system, except they sell millions of the phones so the unit cost works out ok.
What people do think is that the marginal cost of reproducing software is basically zero, regardless of size. This means that when choosing between two products, if product 1 has n features and product 2 has those same exact n features plus an additional one, all consumers will rationally choose product 2 (lots of assumptions, I know).
This is why companies try to get bigger: if they can offer more features, then all the consumers will choose them and they get all the sales. One could argue that this is the reason why the "power law" effect that's been talked about on HN recently happens.
The point of software is to deliver value to the business. There's overhead with supporting and integrating each system -- to borrow an analogy from the article, each milk carton needs cardboard, a date stamp, etc. Even if software development productivity drops 75% and delivery cost increases, having one big carton of milk may be more cost effective than supporting 50 smaller, more nimble cartons.
If you want evidence that this exists, consider that SAP and PeopleSoft exist and are thriving businesses. Or that the general ledgers of most big financial institutions are running on mainframes with code that's been in production for 30 or more years.
For example, because of context switching: when a developer makes one change it can be pretty easy for them to add another change (everything is already "open").
Other comments here mention distribution and combining small simple tools for something larger.
The pop music industry seems to fit the bill.
There is some effort to portray this as unusual compared to other industries through a direct comparison to the retail costs of larger grocery goods and to manufacturing economies of scale, but that's somewhat missing the point. Product development and engineering probably face similar diseconomies in non-software domains (the same complexity and human-factors issues that affect software development are present). OTOH, actually delivering units of identical software (or services provided via software, in the SaaS world) has similar, perhaps in some cases more extreme, economies of scale as are seen in many areas of manufacturing: the marginal costs are low, and more units sold means the fixed cost per unit goes down.
Software is not like milk. That analogy is facile and stupid.
Software should be more like civil engineering, where it's normal to unleash a big team on a big infrastructure project and still have some hope that costs and deadlines stay under control. Or maybe like movie making where there's a cast of thousands, the time is huge, and the costs are epic, but some projects stay under control - while others don't.
It's maybe more interesting to wonder what's different about software than to look for enlightenment on supermarket shelves. Because the problems stated - multiple communication channels, mistakes in modelling and testing - are handled just fine in other industries.
The crippling issues are that you can't model software, and there's not much of a culture of formal specification.
So you can't test software until you build it, requirements may change iteratively, the latest technical "solutions" often turn out to be short-lived fads, and you're always balancing between Shiny New Thing and Tarpit of Technical Debt. That's why it's hard to build. You have to build your cathedral to see if it stays up when it rains. You can't simulate it first. And even if it stays up it may be the wrong shape, or in the wrong place.
It doesn't help that management often sees software as a cost centre instead of an engine room, and doesn't want to pay a realistic rate for quality, maintainability, and good internal documentation.
Having too many people on a project is not the problem. The problem is more usually having no idea what you're doing, why you're doing it, or how you want it done - but believing that you can throw Agile or Six Sigma (etc) at it to make it work anyway, because Management Theory.
I quote the explanation of step 4:
If the candidate tour is worse than the existing tour, still maybe accept it, according to some probability. The probability of accepting an inferior tour is a function of how much longer the candidate is compared to the current tour, and the temperature of the annealing process. A higher temperature makes you more likely to accept an inferior tour
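That acceptance rule is the standard Metropolis criterion; a minimal sketch (function and variable names are my own, not from the quoted text):

```python
import math
import random

def accept(current_len, candidate_len, temperature):
    """Decide whether to move from the current tour to the candidate tour."""
    if candidate_len <= current_len:
        return True  # a better (or equal) tour is always accepted
    # A worse tour is accepted with probability that shrinks both as the
    # candidate gets worse and as the annealing temperature cools.
    return random.random() < math.exp(-(candidate_len - current_len) / temperature)
```

At high temperature the exponent is close to zero, so exp(...) is close to one and almost any candidate is accepted; as the temperature drops toward zero, the rule degenerates into plain hill climbing.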
It's really hard to find something around this.
Another good resource is http://wiki.bash-hackers.org/.
Bash scripting and its array of tools make for a poorly designed language. Writing a non-trivial program, even for an experienced developer, is a painful process. The syntax is uneven, hard to read, and easy to get horribly wrong. I would say mastering Bash has diminishing returns past the intermediate level. Any time you need to write a non-trivial program, you will save time and life expectancy from stress management by using ANY other language, even Perl or C. Writing complex shell-modifying code in my .bashrc has been one of the more tedious and non-rewarding parts of my life.
In case anyone here is interested in more reading material, I recently wrote a small book about Bash that could be helpful: https://gumroad.com/l/datascience
To make sure it didn't read like a manual, each chapter is an "adventure", where I show how to use only command line tools to answer questions such as: What's the average tip of a NYC cab driver? Is there a correlation between a country's GDP and life expectancy? etc
I for one am enjoying reading through the informative guide.
Nice job on the author for deploying Prose.io for community editing of the guide.
There is very much a bad way of doing bash. When I first started doing bash scripts, most of them looked like this:
cat file | grep string
cat file | wc
cat file | while read line
Multiple problems there. Then there were my initial attempts at finding files in a directory:
for i in `ls *`
This is when I learned about globbing.
There is enough variance in how things can be done in Bash, with varying degrees of effectiveness, that Google even has a Shell Style Guide: https://google.github.io/styleguide/shell.xml
But I always think Flint is ripe for opportunity. The people need the basic essentials: water, food, shelter. But the infrastructure to build factories is there. Power, train lines, the whole deal. It's really a shame. The sad part is, the people are still hell-bent on supporting the companies that destroyed the town. Michigan in general is like this; it's why they don't allow Tesla vehicle sales.
Growing up, my family owned a junkyard, and the Flint river ran behind it. It was disgusting. Some of the guys would wade through it on their way to and from work. It was a shortcut, but you had to be a true animal to go that route.
It is true that different water supplies will have different levels of contaminants (lead, arsenic, etc) but can all be within EPA limits. Switching to a water supply with a higher level of contamination will increase exposure. The medical study seems to look at the percentage of children above 5 µg/dL before and after the switch. It goes from 2% to 4%. So with the old water supply, a certain percentage of children were already being exposed to elevated levels of lead. Switching to a water source with higher lead levels will push more children who are being exposed to lead through other sources above the 5 µg/dL mark. However, this would seem to indicate that the primary source of lead for these children above 5 µg/dL is something other than the water.
At some point it should become necessary to recognize and acknowledge that self-government has failed and must end. I'd suggest some form of a city death penalty - declare the city dead and give the locals a one-time offer of relocation assistance to an approved list of better places. The city government, and anyone who remains, are officially on their own.
We've known Flint (and many similar cities) are doomed for decades. Why do we keep them alive as zombies rather than just help the humans and let the municipalities die?
And yet they never took care of their water supply? The one state with so much fresh water has little regulation on keeping water protected.
I keep wondering why it's been prophesied that the world in the end will wage war over water, not oil. And now I am beginning to understand.
Can you boil lead out of water, or does it just become more concentrated?
Americans that cry about how the system "doesn't work" really don't have a clue about how this would turn out in other countries.
It all looks like a game between Emergency Managers appointed by the governor to see who can save the most money fastest.
Full disclosure: my wife works as a reporter at Michigan Radio, but generally doesn't cover Flint.
The Snyder administration will certainly pay a heavy price for "giving free handouts" to the Democrats in Flint, all to remedy a problem that many Republicans don't believe exists.
1. Is this because the non-operational ones can no longer station keep and slowly spread out?
2. Why are they in a belt shape?
3. Why is the belt not centered on the green ring? They seem to be all "moving" in the same direction? (When I looked at it, their orbits tend to "dip" south while above the western hemisphere and north while above eastern hemisphere)
[Edit] Scanning the skies above Russia and China for satellites with a non-specified mission doesn't bring up a single US satellite. So it appears that this data does not include spy satellites.
Leaving aside whether it's allowed/legal... I wonder if it's possible to establish communication with a nonoperational satellite, and what tools are required to do so.
Just remember that there is no big conceptual difference between blocking and locking, which is what you end up doing when you have shared mutable state.
I recommend anyone check out CSP (as in Go and Clojure/ClojureScript, the latter also with immutable data) or Actors (as in Erlang and Elixir).
NodeJS works nicely with nonblocking IO, except what most people seem to forget is that the CPU is a resource too, which is still being blocked by NodeJS when handling any event. Multithreading would help alleviate this.
I'm not saying Erlang is always the right answer (I'm a fan of Go at the moment)... just that there's too much hopping-on-the-bandwagon based on seemingly a lack of awareness of the technology that's out there.
Imagine if all the effort making node work had been put into existing choices.
I recently had a really high cholesterol reading (both total cholesterol and LDL). Everything else (blood pressure, blood glucose etc) seems fine. I'm 38, fit and healthy and nothing that suspect in my family history. I found the attitude of my doctor in all this quite surprising. It amounted basically to "You definitely have familial hypercholesterolemia. There is no other possible option here other than statins". No further questions about what I eat, my stress levels, lifestyle - nothing.
What disappoints me the most here is that now I feel like it's all on me to determine what my real risk levels are and what's appropriate to treat this. I don't subscribe to the mainstream NHS view, still heavily pushed, that eating saturated fat -> high cholesterol == unhealthy, as I think it's a lot more complicated (as this article shows). I don't like being in this situation, as I'm as susceptible to human bias as the next person, and I'm not a doctor. However, almost all of the high quality, science based writing I've read indicates that the mainstream healthcare system's view on cholesterol is wrong.
The article here offers a highly speculative opinion regarding the role of inflammation across many diseases. The luminaries in cardiology cited in this article ran trials which many of us consider to show that, rather than inflammation being important, any reason to start statins is a good reason.
The genetic data currently supports very little role for inflammation in important diseases like coronary artery disease, whereas there is crystalline evidence supporting the connection between LDL-cholesterol levels and mortality. Interestingly, when genetics are invoked and mere epidemiology is reassessed, there is no clear atheroprotective role of HDL cholesterol.
The treatment of statins is very much like the treatment of vaccines: dismissed in a pseudo-intellectual manner by people who know a lot (just not about the subject at hand).
And by the way, we don't really know why allergies develop at all. The best treatments simply tire them out, or suppress the symptoms. The immune system in general is poorly understood, so perhaps it's not surprising that people have trouble thinking past "inflammation" in general.
The idea that the human body has some pervasive fault or malfunction that can be addressed by adherence to an ascetic diet is not really new. One can find similar accounts going back hundreds of years in Western medicine, and even farther back in religions. For some reason, our minds seem to incline that way.
And it's true to some extent: obesity makes almost any disease worse, and obesity can be avoided or reduced by an ascetic diet. That is true for almost every human being, which is the highest standard that medical advice can meet.
Unfortunately, most diet advice does not meet that standard. By which I mean, it's easy to find counterexamples to most diet advice. A diet might tell you to avoid dairy, but there are millions of people who consume dairy and yet are perfectly healthy. A diet might tell you to take fish oil, but there are millions of people who never do, yet are perfectly healthy. A diet might tell you to avoid gluten, yet there are millions of perfectly healthy people who eat gluten every day. And again--I'm not talking about real allergies here, I'm talking about general diet advice.
The future seems pretty clear to me. We know that each person's genetic code is unique. We know that each person's genetic code is expressed in unique ways due to epigenetics and other factors. We know that each person has a unique collection of gut bacteria, skin bacteria, and other hangers-on.
Ultimately, if we want to create more perfect health, we will need to learn how to collect each person's unique information, tie it reliably to health outcomes, and then introduce highly personalized therapies based on that information.
The demand on information technology will be enormous. This will be a growth industry for humanity for at least the next century, I bet.
I recommend a vegan diet. Juice celery, and everything else; drink it soon after juicing and don't put it in the fridge. Juice about a half pound of raw cannabis per week if you can (it's not cured, so it won't get you high).
And, uh... drink lots of H2O.
My opinion is that statins-for-cholesterol, and the obsession with cholesterol generally, is as big a medical scam, a BigPharma misdirection-for-profit, as BigFood's "low-fat", "whole grain", and gluten scams.
Cholesterol is essential, so much so that 90% of it is produced by the liver, independent of any dietary consumption.
Cholesterol + lipids + calcium sticking to arteries is a reaction to an injury, mostly from systemic, low-grade inflammation. High blood pressure also injures arteries, also causing cholesterol plaque.
Some people with high cholesterol have no CVD, while some people with low cholesterol die from CVD. Maybe cholesterol isn't the problem?
Aspirin's help with CVD was initially thought to be due to its blood-thinning effect, getting blood through narrowed arteries, but its anti-inflammatory effect later came to seem the more reasonable explanation. Btw, statins are also anti-inflammatory (aspirin and similar drugs are cheaper).
Systemic inflammation reduced by aspirin (or statins) means less injury to arteries, and thus less plaque.
Systemic, low-grade inflammation also reduces insulin sensitivity, so the body produces more insulin, which is a really nasty hormone. The result? Adult-onset Type II diabetes.
So "I think" watching inflammatory bio-markers is more important than watching cholesterol, as one could take away from the New Yorker article.
An alkalizing, anti-inflammatory diet is key, complemented by both moderate resistance work and moderate cardio exercise, which also reduce inflammation.
"life-style" of diet and exercise is your best "Heal Thyself" strategy, not BigPharma.
Btw, chronic, systemic, low-grade inflammation causes chronically high levels of cortisone (derived from cholesterol) to be produced to reduce the inflammation, and this wreaks havoc on the immune system, which of course causes inflammation as a response to injury or foreign matter.
1. Doctors (traditional, alternative, specialists, etc.) are frequently just completely wrong about many things. And I don't use the word "frequently" just to be inflammatory: after seeing more than a dozen doctors about the same issue, most gave contradictory advice and opinions, which inherently means that most of them were wrong. Often their being wrong was relatively harmless, but occasionally it was devastating.
2. Lifestyle often matters. Diet and lifestyle changes were not a cure in this case but they did have a very significant effect, at this point more than any of the several medications tried. There does seem to be a general shift towards understanding and seriously considering epigenetic influences in general but we still seem to be in the dark ages when it comes to how to factor all these things into our medical decisions. There are just tremendous amounts of contradictory data, opinions, etc.
3. Most importantly, everyone is different. And I mean REALLY different. I am not talking about different as in statins only improving outcomes in a small percentage of participants in a study. I mean more like: I love peanut butter, but it kills some people. I have seen drugs, supplements, etc. that are generally accepted as great things do tremendous damage. Some people thrive on a vegan diet while others suffer. One drug relieves chronic nerve pain in some and exacerbates it in others. There are very few universally good or bad things when it comes to health (yes, snarky commenters, asbestos is universally bad for anyone's health; I mean when it comes to things a doctor/practitioner would recommend to a patient). Literally, one person's medicine can be another's poison.
In summary, my only advice (which you seem to be following) is that you are the only one who will really look out for yourself and you are the only one who is really an expert on what you are dealing with. The best solution I have come up with is to get as much trusted-ish information as I can process and attempt to triangulate the data and move forward carefully and, honestly, somewhat intuitively. In addition to that, find a practitioner who actually listens, considers your input, and helps you come to reasonable conclusions. In my case, unfortunately, that doctor was about the 12th one (and actually an ND in this case) so hopefully you have better luck. Also, though she has been excellent, I still can't put blind faith in her as she is almost certainly wrong about many things as well but at least she knows and accepts that.
In summary of my summary: don't blindly follow anyone's advice. Unfortunately, you have to come to your own conclusions on what to do and whom to trust.
Among other things. My first question was, is the hardware open? Couldn't find an answer to that.
Edit: Apparently revision 2 of the Purism will possibly have Coreboot.
I've wanted for years to run Windows and Linux on one laptop simultaneously via hypervisors -- not dual-booting, not one-OS-is-host, etc. -- but was under the impression that hardware/IO support would not be feasible.
I hoped it would support 32GB of RAM in a 13" laptop, but the maximum is 16GB. The only option seems to be the Portege R30 Skylake version (not yet announced), which has two DDR slots.
Doctorow's Law: "Anytime someone puts a lock on something you own, against your wishes, and doesn't give you the key, they're not doing it for your benefit."
Bull Mountain, Bullrun, Bullsh