The cminusminus domain is no longer the official site (though it does have more modern CSS); it also lacks links to all the informative papers!
C-- is very similar overall to LLVM IR. There are crucial differences, but you could roughly think of them as equivalent representations that you can map between (albeit that's glossing over some important details).
In fact, a few people have been mulling the idea of writing an LLVM IR frontend that would basically be a C-- variant. LLVM IR has a human-readable format, but it's not quite a programmer-writable format!
C-- is also the final representation in the GHC compiler before code generation (i.e., before the "native" backend, the LLVM backend, or the unregisterised GCC C backend).
There are probably a few other things I could say, but that covers the basics. I'm also involved in GHC dev and have actually done a teeny bit of work on the C-- related bits of the compiler.
Relatedly: I have a few toy C-- snippets you can compile and benchmark using GHC, in a talk I gave a few months ago https://bitbucket.org/carter/who-ya-gonna-call-talk-may-2013... https://vimeo.com/69025829
I should also add that C-- in GHC <= 7.6 doesn't have function arguments, but in GHC HEAD / 7.7 and soon 7.8, you can have nice function args in the C-- functions. See https://github.com/ghc/ghc/blob/master/rts/PrimOps.cmm for GHC HEAD examples, vs https://github.com/ghc/ghc/blob/ghc-7.6/rts/PrimOps.cmm for the old style.
Edit: Ok, I've found the following SO thread: http://stackoverflow.com/questions/3891513/how-does-c-compar...
If this has any truth (perhaps it's a different C--?), I'd like to know which one is being referred to.
An overview here: http://cr.yp.to/qhasm/20050129-portable.txt
scripting lang --> programming lang --> native code
I think now's a good time to go listen to Transformer again.
One of the tools, regextr.exe, is signed, but the tool kaspersky_tcpip_fix.exe that they have you run first is not.
It's amazing what kind of things slip by.
For those not aware, it's basically an offshoot of python(x,y), which is a really nice python distribution for windows.
Personally, I'm far, far too wedded to vim + ipython to use anything else, but it is _really_ nice to be able to point people using windows to python(x,y). I have nothing against commercial distributions like canopy or anaconda (which offer many advantages), but there are a lot of cases where a freely-redistributable option makes more sense.
I appreciate the great effort behind Spyder, but I think the UI, the documentation, the website, etc. lack a lot of polish and attention. I have tried it a couple of times and I never get past simple things, like installing packages and managing the environment.
On the other hand, only with Spyder do I end up with that many annoying trailing spaces, for some reason.
I kept a diary for two decades, going through over two dozen notebooks in the process. In graduate school I also kept lab notes for years. I used cheap notebooks. Never did I see the value of Moleskines justifying their cost.
Am I missing something?
They only remind me of this Onion article -- http://www.theonion.com/articles/privileged-little-artiste-w...
Privileged Little Artiste Writing Something Oh-So-Precious Into His Moleskine Notebook
SAN FRANCISCO -- After gently unfastening the elastic strap keeping his dearest musings safe from prying eyes, little literary artiste Evan Stansky penned a few more darling thoughts into his clothbound Moleskine notebook Wednesday. "These are much higher quality than the notebooks you find at CVS," lilted the auteur, who couldn't be bothered to use -- dare it be said -- a journal of lesser craftsmanship or pedigree, or one not famously used by such legendary artists as van Gogh and Hemingway. "They're a little more expensive, but I try to write on both sides so I don't go through them as quickly." At press time, the princely scribe was seen finishing his apricot jasmine tea, asking a mere mortal sitting nearby to watch his literary accoutrements, and then prancing off to the Starbucks powder room, light as a feather.
I read aphyr's introduction. It was kind of like a warm "you can do it". It made me feel kind of warm fuzzy that someone was looking out for people like me, and other groups that are more obviously disenfranchised than mine (mine has it comparatively well off I think).
And because of this encouragement, I read the entire post. So was it effective, at least for me? Yes it was.
Cue HN comments. I read them, and my first reaction was that I didn't understand why it's not OK to say what aphyr said. Never mind that he put it in the "who is this for" section, which seems (to me) to be an eminently appropriate place to put such a thing.
But as I read more of them, I began to wonder whether what I thought and what other people think are so different that I'm just never going to fit in with this community.
I began to doubt myself. Eventually the entire effect of the introduction was reversed. Soon I felt worse than when I started.
Then today happened. Let's consider some facts.
* I saw yesterday that aphyr wrote a book, and in his "who this is for" section he wrote that it's partially to encourage underrepresented groups to program.
* That was part 1.
* This is part 2.
* Not only were the HN comments on part 1 dominated by this issue, but also the HN comments in part 2.
* So, merely writing this once is enough for the issue to follow you around in subsequent posts.
As a member of one of these underrepresented groups I'm both shocked and -- honestly? -- kind of hurt. If aphyr can't write this in the "who is this for" section, then where is it appropriate to have this discussion?
People of HN, you may not be convinced that this was the right thing to do, but do know that this type of discussion is actively hurting your ability to be diverse.
"By default Clojure operates with natural numbers as instances of Java's long primitive type. When a primitive integer operation results in a value that is too large to be contained in a primitive value, a java.lang.ArithmeticException is thrown. Clojure provides a set of alternative math operators suffixed with an apostrophe: +', -', *', inc', and dec'. These operators auto-promote to BigInt upon overflow, but are less efficient than the regular math operators."
So you can write

(inc (bigint Long/MAX_VALUE)) ; promote to BigInt up front, so plain inc never overflows

whereas plain (inc Long/MAX_VALUE) throws an ArithmeticException, and (inc' Long/MAX_VALUE) auto-promotes.
The flip side though, the monkey in the wrench as they say, is that the projects that participate have to conform to the rules of participation. In CPAN's case it's a standardized dependency, build, and install model, with required unit tests. So perhaps the first step here is to provide an incentive for clean build processes on a project.
I wish someone made a similar way to share libraries like we can do in PHP (https://packagist.org) or Ruby (http://bundler.io/), but for C++.
Well, no surprises there!
Nissan has a 24 kWh battery for a reason: they can sell the car, after incentives, for about the price of a Prius. Doubling the pack would probably add about 10 grand to the price. If Nissan thought that was where they wanted to position themselves in the market, they would have done so already.
Though, personally, a 48 kWh Leaf would be pretty awesome. I'd pay an extra 10 grand for that. It would be a nice middle ground between an 80-grand Tesla and the current Leaf.
I guess a 48 kWh Leaf is about the same as the 60 kWh Tesla in range? (It might not be; cheaper/lamer vs. physically larger but more advanced might be a wash.)
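For a rough sanity check on that guess, here's some back-of-envelope arithmetic; the Wh/mile consumption figures below are my own assumptions for illustration, not official EPA numbers:

```python
# Estimated range = pack energy / consumption per mile.
def est_range_miles(pack_kwh, wh_per_mile):
    return pack_kwh * 1000 / wh_per_mile

# Assumed consumption: ~290 Wh/mi for a Leaf-sized car, ~330 Wh/mi for
# the heavier Model S (illustrative guesses, not measured figures).
leaf_48 = est_range_miles(48, 290)   # roughly 165 miles
tesla_60 = est_range_miles(60, 330)  # roughly 182 miles
print(leaf_48, tesla_60)
```

Under those assumptions the two land in the same ballpark, which supports the "might be a wash" intuition.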
There are countless examples of giant companies that haven't realized the enormous future cashflows embedded in their stock prices. It is obvious that Amazon generates lots and lots of revenue, no one disagrees with that. What remains an open question is will they be able to ease off of massive infrastructure investment and actually harness the future profitability that everyone seems certain of at this point.
These straw-man arguments that profitless companies have huge market caps so they will turn hugely profitable ignore the actual question of whether they can achieve that. Can profit does not equal will profit.
It's amazing to me that folks in the Silicon Valley/VC world continue to look at the performance of publicly-traded tech companies and for some reason never seem to consider that, as deserving of premium valuations as some of these companies may be, much of the crazy price action of the past several years has been driven by the Fed.
You can't seriously look at the charts of companies like Netflix and Tesla and believe that this is the result of DCF analysis.
> If you believe, as Amazon management does, that the future growth is going to be there for Amazon, then you ignore the current P&L and think about what a future P&L might look like.
> If you think that Salesforce and Workday can continue to grow their revenues at or near their current growth rates, then you ignore the current P&L and think about what a future P&L might look like.
How about this: if the Fed balance sheet continues to go up and to the right, ignore fundamentals, pick momentum stocks and think about what their future charts might look like. It doesn't take a lot of imagination: up and to the right.
Of course, when the fun ends, "profitless prosperity" will indeed be profitless for the folks who didn't cash out in time.
A problem I get into at the end of the second article is that gamma-correction is very important for good image scaling results. However, almost nobody gamma corrects during scaling, even today.
edit: comparison http://imgur.com/a/fC8iQ#1
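To make the gamma point concrete, here's a minimal sketch of why averaging gamma-encoded pixel values goes wrong; it assumes a simple 2.2 power curve rather than the exact piecewise sRGB transfer function:

```python
GAMMA = 2.2

def to_linear(v):
    """Map an 8-bit gamma-encoded value to linear light in [0, 1]."""
    return (v / 255.0) ** GAMMA

def to_srgb(lin):
    """Map linear light back to an 8-bit gamma-encoded value."""
    return round((lin ** (1.0 / GAMMA)) * 255.0)

def average_naive(a, b):
    # Averaging gamma-encoded values directly: what most scalers do.
    return round((a + b) / 2)

def average_linear(a, b):
    # Averaging in linear light: the physically correct result.
    return to_srgb((to_linear(a) + to_linear(b)) / 2)

print(average_naive(0, 255))   # 128
print(average_linear(0, 255))  # 186
```

Downscaling a 50/50 black-and-white pattern naively gives 128, which looks noticeably too dark; averaging in linear light gives about 186, much closer to the perceived brightness of the original.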
Most typically in that domain you also use windowed sinc filters and there's a ton of literature on the tradeoffs of specific window designs, as well as very fast fixed point implementations etc.
It's all pretty interesting stuff and trying to make it run efficiently on modern mobile hardware is a fun challenge.
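As a tiny illustration of the windowed-sinc family, here's the Lanczos kernel (the ideal sinc windowed by a wider sinc), one of the most common choices in that literature:

```python
import math

def lanczos(x, a=3):
    """Lanczos kernel: sinc(x) windowed by sinc(x/a), zero outside [-a, a]."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    # sin(px)/px * a*sin(px/a)/px  ==  sinc(x) * sinc(x/a)
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# Filter taps for resampling halfway between two source pixels; real
# implementations normalize the taps so they sum to exactly 1.
weights = [lanczos(x + 0.5) for x in range(-3, 3)]
```

The window tapers the infinite sinc to finite support, trading a little ringing against sharpness; the literature on window designs is largely about tuning that trade-off.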
We even took the exact same artwork and made an 8 foot tall banner with it, and it looks great!
If there was a popular phone platform that was even more open, while providing the same levels of functionality, I'd be very tempted to give it a go.
I bought my Android device 2.5 years ago and it has only gotten up to 2.3.5; initially it was on 2.3.3.
Who needs RMS when this guy is on the case.
"Replicant is a fully free Android distribution running on several devices."
If you're waiting for all browsers to implement these CSS3 features correctly, you'll end up never using them.
For example, CSS animations have been around for at least 2 years. Only Firefox and Webkit browsers supported them and you had to rely on vendor-prefixed properties, but animations were exciting enough for me to start experimenting with them as soon as I could.
There is a permanent rant against "fancy" CSS3 features that haven't reached "W3C standard" status yet. But a website's experience doesn't need to be visually consistent across browsers. If you're using a plain hexadecimal color code as a fallback for rgba, it's OK. If your intro is animated in Chrome but not IE, it's OK too. If your last paragraph has a margin that the pseudo-class :last-child should have canceled, it's OK as well.
Front-End developers have waited more than a decade for CSS improvements. You can't wait for a unified browser environment to start implementing them. I'm surprised by this article's tone, as if today, suddenly, everything changed. No way: browser support is a permanent process. Don't wait for that perfect day because it will never come. Just start having fun with CSS3 while providing a decent experience for IE (the main culprit).
It's a lot of additional effort, but it's worth it and definitely doable if you need the browser support and you want to use modern techniques.
In most other cases all authors really want is `box-sizing: border-box` (which works in IE8+).
Using a combination of the older and new syntax (display: box and display: flex), you can already do quite a bit.
I've been using autoprefixer (https://github.com/ai/autoprefixer) for a few months now, building simple flexboxes all over the place with the attributes from the new spec, and have only run into one or two little issues that I can generally hack around.
And not only are height and width affected by padding+borders, they are affected in different ways, respectively. It's just weird. Does anybody have any insight into what the standardization bodies' reasons were?
That being said, when I switched to e-cigarettes, it was very easy to switch back to regular cigarettes. That "narrow bridge of familiarity" was easy to cross back over, especially when I ran out of vapor cartridges. Ultimately for me to quit it took stopping cigarettes cold-turkey. Nicotine gum and a transition to regular gum helped a lot. It's been one of the hardest but most rewarding things I've ever done in my life.
My takeaway was that the federal government will eventually get involved and start regulating the shipping of these items. Unfortunately, the stuff people are getting now is mostly made in China, and you can't be too certain of what the heck you are getting when you take a big inhale and just taste the apple flavoring or whatever. Who knows what chemicals are used to make them.
Most of the testimonials I received were from people who were able to quit smoking because of my product, but I definitely got some complaints as well. The biggest benefit of the e-cig is that there is little to no smell. I've definitely hit them on airplanes before with no problems. It is an interesting market, though, where it seems like new types of e-cigs are coming out all the time.
Another thing about e-cigs is that people are making them into e-joints. This is very popular, especially in medical marijuana states. Before they banned synthetic marijuana, we were developing a synthetic marijuana e-cig. It would have been fun to have around, but ultimately I am glad not to be filling them en masse any more!
1) Quotes from CEO
2) Little anecdote from Chief Marketing Officer
3) Big vision: "vaping" becomes commonplace.
4) Key differentiators: "building e-cigarettes that look, feel and perform like the real thing" (this is sprinkled throughout)
5) Celebrity endorsement, big names: Peter Thiel, Bruno Mars
The rest (past the first page) is a nice overview of the e-cigarette industry and some challenges facing NJOY. I wonder how long they were working on an NYT article, or if this was something pretty easy for them to get, given the interest around e-cigarettes.
I don't care if it's just steam, I don't want it in my face. I can't believe people don't get this.
I'm not a smoker, but I can't see why e-cigarettes should be anything more than a way to help stop smoking.
I never smoked a cigarette in my life, but I tried e-cigs as a replacement for coffee. It works: no hassle with tea or coffee preparation, and it is probably healthier. But being labeled a smoker puts me off.
How were the results for you? And does anybody know about research on, or regulation of, these types of e-juices?
And since those same MAOI-affecting compounds (probably beta-carbolines) found in tobacco are also found in brewed coffee, various seasonings, grilled foods, and other stuff, has anybody noticed a combination of vaping and some food being more effective?
Most public health officials seem to agree that the levels of toxins in e-cigarettes are far lower than those in traditional cigarettes. But they also say that far too little is known, not just about potentially harmful aspects of particular brands of e-cigarettes, but also about whether there is harm from secondhand vapor. Dr. Glantz of U.C.S.F. says that in the absence of data, indoor smoking bans should also cover e-cigarettes.
The FDA is collecting reports of adverse effects and there are plenty:
I understand why my mother started smoking when she was 16 and then smoked a pack a day for the next 43 years until she died from cancer. Why in the world are people starting to smoke today with everything we know about it?
What I find interesting though, is that while the spray does indeed get rid of the cravings, it (and the e-cigs I've used) is not the same "feeling" as smoking. This is possibly because of the lack of any MAOIs in the liquid itself.
How many of those using e-cigs here are ex-smokers? How many picked it up because it's a socially "acceptable" drug that you can now take without killing yourself slowly? I find it such a fascinating topic!
Below are the "results" sections from the summaries of the two non-saliva articles among the 3 referenced in the posted "article".
From "A two-year follow-up study of risk of depression according to work-unit measures of psychological demands and decision latitude."
RESULTS: The OR for depression according to psychological demands was 1.07 [95% confidence interval (95% CI) 0.42-2.49] for every unit of change on a 5-point scale. The corresponding OR for decision latitude was 1.85 (95% CI 0.55-6.26). No interactive effects of psychological demands and decision latitude were observed.
CONCLUSION: These findings suggest that low decision latitude may predict depression, but confidence intervals are wide and findings are also compatible with no increased risk.
From "Work-unit measures of organisational justice and risk of depression--a 2-year cohort study."
RESULTS: Working in a work unit with low procedural justice (adjusted ORs of 2.50, 95% CI 1.06 to 5.88) and low relational justice (3.14, 95% CI 1.37 to 7.19) predicted onset of depression.
CONCLUSIONS: Our results indicate that a work environment characterised by low levels of justice is a risk factor for depression.
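One way to read the odds ratios quoted above: if the 95% CI straddles 1.0, the data are compatible with no effect at all. A quick sketch:

```python
def ci_excludes_no_effect(lo, hi):
    """True if an odds ratio's confidence interval excludes OR = 1.0."""
    return not (lo <= 1.0 <= hi)

# First study: neither demands (0.42-2.49) nor latitude (0.55-6.26)
# reaches significance; the second study's justice ORs (1.06-5.88,
# 1.37-7.19) do.
print(ci_excludes_no_effect(0.42, 2.49))  # False
print(ci_excludes_no_effect(0.55, 6.26))  # False
print(ci_excludes_no_effect(1.06, 5.88))  # True
```

That's why the first paper hedges ("compatible with no increased risk") while the second states a positive finding.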
"Surprisingly, the study indicates that a heavy workload has no effect on whether or not employees become depressed. Instead, it is the work environment and the feeling of being treated unfairly by the management that has the greatest effect on an employee's mood."
I guess there are many ways to improve the work environment (and many ways not to), but how do you improve fairness? Isn't it already too late the moment you recognize unfairness?
High cortisol is however associated with schizophrenia, for both present and fetal conditions.
ObDisclaimer: Dammit, Jim, I'm a programmer, not a doctor.
My favourite example of this was trying to deploy an app within the VA that was written in Django. I was told "Python is not on the list of acceptable languages." So we came back to them and said, "Good news everyone, we ported it to Java." Of course, it was just Jython, but that's the sort of stuff you encounter.
Multiply this by the complexity involved in trying to herd all these cats into one backend like healthcare.gov and it was doomed to fail.
Let me fix that statement: "The front end technology is not the worst problem here."
CGI obviously borked this project. The government deserves its own special classification of criticism, but poor planning, change management, etc. from the government is no excuse for CGI not building an architecturally sound web site.
The contract was $350 million? Good grief, they overpaid. Nonetheless, if we could go back in time AND assuming we needed to spend this budget, here's what I would have done:
1. We make investments of $15 million in 20 different startups, and tell them to implement the initial phase -- let's say we call it the "minimum viable product" or MVP. Each startup has the same deadline for delivery.
2. On the delivery date, all companies meet with us to review their MVP. We call it a "demo day" and view all 20 demos.
3. Through some set of criteria, we create a short list of five companies from the 20 demos. Those five companies receive an additional $5 million investment, and another delivery deadline.
4. The companies iterate on their MVP and come back for another demo, this time with a deep dive.
5. We pick a winner from those five. The winner gets another $25 million investment and is responsible for any additional work to be completed.
TechStars for government, essentially.
Because integration means integrating _requirements_, leading to determination and priority of requirements. The current organizational structure doesn't seem to have anyone responsible for even coordinating that. But even if there were, they would need terrific knowledge of each agency's internal systems and legal requirements to determine what is and isn't necessary. And enormous authority, meaning both credibility and power to dictate, to get their determinations to stick.
Absent someone looking over the process, each agency will just "require" everything they might need or want. Leaving something out is risky, unless you know a lot about what you are doing and what will happen next and trust your management. Even if they had all those latter characteristics, bureaucracies don't do risk.
We all know how complexity grows exponentially. I bet the requirements document for this thing doesn't exist, and if it did it would be a clusterfuck of epic proportions.
Here is my wild theory: The possibility this could succeed died the day Tom Daschle withdrew his nomination for Secretary of HHS. Not that Daschle himself is special, though he is pretty bright. But he was slated for an unusual joint role, running HHS and a White House appointment running the health care effort. A position like that might have had access to the specialized knowledge to know what needed doing and the Presidential delegation of power to get it done. If IRS says "we must have X" and Daschle KNOWS they don't because a real expert knows they don't, he can get them in line or they can explain the problem to the President's chief of staff.
Here is the wild part. Daschle was canned, inexplicably, over a truly stupid tax issue (he didn't declare a car service as income), while others had far more serious issues waived (Geithner lied about CASH income despite instruction to declare it). Why? I speculate, precisely because the role he designed for himself was remarkably powerful, and effectively outside any review because of the complexity and specialization of its task. Wouldn't the President want someone with the power and knowledge to implement his most important policy? Yes, but not someone beyond his control. Politicians are about power. JFK didn't use the legislative skill of Johnson because he feared Johnson would serve Johnson's interest, not Kennedy's. Once Obama and his people realized that Daschle could become the effective President, and Obama something of a titular head of state, they shivved him.
It's all speculation. But it is all plausible enough to suggest why government doesn't work. Massively complicated projects like Google work because its people are, by and large, working for a common purpose on tasks that are commonly understood under common accountability. Government and bureaucracy are fundamentally divided in purpose and understanding. The components can be united by power and knowledge, but by its very nature the system resists establishment of such power and knowledge.
Stick the processing pipeline in Twitter Storm (which can retry any step until the whole pipeline is done) and structure the requests as nearly-idempotent (so a repeated reply is harmless, and the first arrival associated with the ticket wins). Finally, you have an "inbox" where people can wait for and see their answer, with optional SMS and email notification.
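A minimal sketch of that retry-plus-first-wins idea in plain Python (the names are hypothetical; a real Storm topology would do this with tuple acking and a shared datastore):

```python
import time

def run_step(step, ticket, attempts=3, delay=0.0):
    """Retry one pipeline step; safe only because steps are idempotent."""
    for attempt in range(attempts):
        try:
            return step(ticket)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# "Inbox": the first reply recorded for a ticket wins, so a duplicate
# reply caused by a retried step is harmless.
results = {}

def record_answer(ticket_id, answer):
    return results.setdefault(ticket_id, answer)
```

Because each step can be replayed and duplicate answers are discarded, the pipeline as a whole tolerates transient backend failures without corrupting state.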
This does support the widely held doubt that this system will be fixed anytime soon. Clearly the management of the project and the design of the architecture are/were fundamentally flawed, and it's very unlikely that it can be fixed in 30 days or whatever at this point.
30 year old (1983) mainframes and databases were designed to handle large transaction loads. For example, airline reservation systems and banking systems were built on them.
And upgrading a mainframe (at least an IBM mainframe) to a faster mainframe isn't such a daunting task, since all the code from 30 years ago (or even from the 1960s) is still object-code compatible with the new machines - you can make it run even if you've lost your source code. There's still lots of 30 year old (and older) Cobol code running on mainframes today.
I agree that re-writing the 30 year old software would be hard, but simply getting it to run faster could probably be done just by spending money on the latest mainframes and disk drives. But if nobody ever did a load test on the site, they wouldn't have known that they had to do this. They probably just thought: "Oh, we have to write a web site that talks to a bunch of databases, how hard could that be?" (By the way, they could have written test code to do a load test on those legacy systems without even having a web site running. In retrospect, that's the first thing they should have done, and it would have shown them that their critical path wasn't the user interface.)
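The kind of standalone load test described above doesn't need a website at all; something like this sketch, where `query_legacy` is a hypothetical stand-in for a call into a legacy backend, would have surfaced the bottleneck early:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def query_legacy(request_id):
    """Hypothetical stand-in for a query against a legacy system."""
    time.sleep(0.001)  # simulate backend latency
    return request_id

def load_test(n_requests=100, concurrency=10):
    """Fire n_requests queries with bounded concurrency; report count and wall time."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(query_legacy, range(n_requests)))
    return len(results), time.monotonic() - start
```

Plotting completed requests per second against rising concurrency would show exactly where the legacy systems saturate, long before any UI exists.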
My understanding from previous coverage is that some of the state exchange sites, such as California's, are performing acceptably. If that is true, do those state sites also connect to and query the same legacy systems as the federal site? If so, why doesn't the federal government simply ask for or take that code? Surely it's been made available to them? If not, are the legal requirements for the states' exchanges somehow different than the federal site? That seems unlikely since my understanding is the federal site is simply standing in for states that elected to not create exchange sites. I don't see why it would be subject to extra requirements.
What am I missing here?
Government projects like the healthcare exchange don't have that degree of freedom - if they go down the wrong track, the only choice is put in more resources until it's back on track. Giving up or changing objectives isn't a decision under the control of the project - it's a legislative or budgetary question.
The problem is a system where if you don't deliver you get paid millions of dollars and still get jobs.
Have people enter their info, then show them a screen that says "your quote will be emailed to you in 24 hours." Then the integration system has 24 hours to retry any failed data pulls, match up all the data, and generate a quote.
This is not the contractor's fault. It's the government's. Before I left to work with a startup, I was appalled by the lack of ownership on the client's side. Everybody is looking to shuffle responsibility, keep the lowest profile, and do the least amount of work.
It doesn't matter who's writing the code: unless they find somebody competent and passionate on the government side, large projects are destined to fail and would be better off left to the private sector. This is government waste at its best.
I'm neither Republican nor Democrat, but just to add: if the rinky-dink app I was working on for the Dept. of Commerce gets shown to the President when it's in 'ALPHA' state, there is no way the most informed person in the world didn't know that the site was going to fail from the get-go.
This is absolutely incredible... two weeks?! Dealing with these legacy systems should have been the absolute first thing tested; is it not the most likely point of failure/bottleneck? Someone on the team had to have been screaming about this and being ignored, all the while shitting their pants waiting for go-live for the whole thing to crumble.
The press seems very focused on the obvious availability and performance problems as well as the errors that come up within the sites that prevent someone from completing their application. There are a whole slew of second-order defects that make it appear your application was successful and correct but were based on incorrect calculations, incomplete data, or other bugs that are not obvious to the user at the time they complete the process.
Database One:   [=======----------]
Database Two:   [============-----]
Database Three: [==---------------]
I'm not in the least bit surprised to see that a lot of the work and resulting problems with healthcare.gov are on the backend.
I just wish the government realized that we have all these amazing developers over in the Bay Area who can do a better job than the majority of those developers currently writing software for government contracts. I'm shocked no one in government has said to themselves, "What do we have to do to make our software problems accessible to the types of engineers working at the Googles and Dropboxes of the world?"
"Everyone outsources large portions of their IT, and they should. It's called specialization and division of labor. If FedEx's core competence is not in IT, they should outsource their IT to people who know what they are doing."
These days I believe each department of government that needs an iPhone application would do better to hire an iOS developer full time to maintain and polish the fuck out of it, continually.
The sheer complexity of this rent-seeking indirection makes keeping track of the millions of distinct participant-instances that can play out in hundreds of different ways, involving integrating tens of massive legacy systems with new, flexible business logic (for a law in flux), impractical.
With single-payer, they could have scrapped the vast majority of this complexity.
Time Magazine's "Bitter Pill" article stated Medicare had an IT system that made them more efficient than private health insurance providers. Isn't such a system large enough?
Part of me has been ignoring a lot of the chatter around the ACA as potential right wing fabricated drama. Too much noise and bilateral bullshit being thrown about these days.
That was until a few days ago, when I learned our insurance has both more than doubled in cost and is also scheduled for cancellation. Doubled and cancelled. All as a direct result of the ACA. Brilliant! To say this was shocking is an understatement. Our annual cost will go well past $15K.
There's a tragedy of unintended consequences, side effects and direct effects, being played out in the background that hasn't completely come to the surface yet. We certainly can't be the last family to get news of this kind. That means in the coming months it is likely hundreds of thousands, if not millions, of additional individuals and families are going to receive these dreaded letters. Apparently hundreds of thousands already have. Last week was our turn.
At some point this and other issues will become difficult to ignore. And they will dwarf the IT issues. The website, as much of a disaster as it is, is likely to pale in comparison to all of the other, non-IT, issues.
Some of what's happening is related to the incredible disconnect between Washington and technology. All you need to do is listen to some of these folks talk about the website issue to see how little they understand. I heard one senator say something akin to "they just have to re-enter a list of five million codes". In other words, the term "code" to some of these guys means "numbers" and that someone made a data entry error in copying "codes" into the website.
BSS (Balaji Srinivasan) covered some of this in his excellent Startup School talk:
A talk which, he comments, has been mutated into something far different from what he said by the modern equivalent of the "broken telephone" game.
I agree very much with his suggestion that an "exit" is required. Not meaning that we ought to pull up roots and go, but rather that the tech community ought to almost ignore the dinosaurs and go ahead and evolve a society more aligned with modern realities. In his talk he gives examples of various US cities that have been "exited" to some extent through technologies developed in the free market.
To some extent, it's an Innovator's Dilemma kind of a problem.
The only way to make step changes is to do it well outside of the organization looking after the status quo, because that's all they know and that's all they can focus on.
They're going to continue to suck royally, as royalty does.
Yes, even the user interface code for an external testing device used by medical doctors has to have a complete code review by the FDA (I know of an example). So products that look rather simple and inherently safe to laymen can take years to get to market by the time all regulatory approvals are obtained. But the article kindly submitted here immediately caught my eye with examples of medical device applications of biodegradable electronic circuits. There should be a lot of private industry uptake of further research and development of this technology, which someday may be part of routine medical practice as you and I visit physicians.
Thus, if a company spends money to build or acquire a new asset, it is called capital spending and it is not subtracted from the profits. For example, if a company had a million dollars of profit and decided to spend that million dollars on a new fulfillment center, it could build the fulfillment center and still report a million dollars in profit.
So it is not quite clear-cut to say that Amazon's desire to build fulfillment centers around the world is costing them their profits. Those things should be capitalized and once they are capitalized they should not affect the profits. Amazon did in fact report significant capital spending (as one can see on their cash flow statement).
However, things are not that simple. Sometimes some expenses which are about building for the future and investing into new growth are not capitalized. This is the case because for some expenses the benefits are so uncertain and difficult to quantify that the SEC requires that they are reported as ordinary expenses instead of capital spending. These types of expenses tend to involve R&D and may include certain administrative expenses associated with growth initiatives.
Therefore, many companies that are trying to grow do report lower profits, because they have expenses associated with investment in future growth that are not capitalized. This may be the case for Amazon, but to what extent is an open question. For example, they do capitalize software and website development for new products and websites, so one cannot simply say that they are showing losses because they are spending all the money on making great new products. But then again, they expense software development for existing products, so perhaps the losses are associated with new growth features built into existing software.
So all in all it is a big muddle, and it is not at all clear whether Amazon is an inherently highly profitable company that happens to be investing in the future, or is wasting money, or simply has a business model that is not that profitable.
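A toy sketch of the accounting distinction described above, with made-up numbers and an assumed straight-line depreciation over ten years:

```python
# Toy illustration: the same $1M outlay reported two ways.
# All figures are hypothetical; depreciation is straight-line over 10 years.
operating_profit = 1_000_000   # profit before the outlay
outlay = 1_000_000             # cost of the new fulfillment center
useful_life_years = 10         # assumed useful life of the asset

# Expensed: the full outlay hits this year's income statement.
profit_if_expensed = operating_profit - outlay

# Capitalized: only this year's depreciation hits the income statement;
# the rest of the outlay shows up on the cash flow statement as capex.
profit_if_capitalized = operating_profit - outlay / useful_life_years

print(profit_if_expensed)      # 0
print(profit_if_capitalized)   # 900000.0
```

Same cash out the door either way; only the reported profit differs, which is why the cash flow statement is the place to look.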
In true Amazon "dominate all retail by making it accessible to consumers" fashion, their relatively new "Fulfillment By Amazon" service drastically simplifies consumer reselling by eliminating the need for the consumer to do the packing and shipping.
It's an amazing service, and they are getting darn close to a "just ship us a box of your stuff" model.
I bet we see that inside the next five years. There are lots of problems (like what is and is not valuable), but you can see them already working around these issues by only accepting items with modern barcodes, charging small warehousing fees if something sits too long in inventory, etc.
Bezos wanted AWS to be a utility with discount rates, even if that meant losing money in the short term. Willem van Biljon, who worked with Chris Pinkham on EC2 and stayed for a few months after Pinkham quit in 2006, proposed pricing EC2 instances at fifteen cents an hour, a rate that he believed would allow the company to break even on the service. In an S Team meeting before EC2 launched, Bezos unilaterally revised that to ten cents. "You realize you could lose money on that for a long time," van Biljon told him. "Great," Bezos said.
Bezos believed his company had a natural advantage in its cost structure and ability to survive in the thin atmosphere of low-margin businesses. Companies like IBM, Microsoft, and Google, he suspected, would hesitate to get into such markets because it would depress their overall profit margins. Bill Miller, the chief investment officer at Legg Mason Capital Management and a major Amazon shareholder, asked Bezos at the time about the profitability prospects for AWS. Bezos predicted they would be good over the long term but said that he didn't want to repeat Steve Jobs's mistake of pricing the iPhone in a way that was so fantastically profitable that the smartphone market became a magnet for competition.
I don't have the most broad corporate employment history, but as far as it extends, I've met tons of people who feel like they could join a competitor to their own employer and win against them within a decade or so. I have never met a single person who worked at Amazon that has felt that way about competing against Amazon. Even if that competitor had the pocketbooks of Wal-Mart. To me, that speaks volumes about a business strategy.
If sales ever plateau and investors force you to generate profits, the plane stalls and the whole thing spirals down, because it's the profit reinvestment which actually drives sales growth, and actual profits attract competitors who have been unable to pull off the profitless-hyper-growth trick. So far that hasn't happened.
Amazon's value is in the entire business and not the sum of its parts, which means that at some point, investors expect to own a profit making enterprise and not a bunch of warehouses. However, that won't happen until sales plateau or Bezos dies. Ironically, at that point the business loses a lot of value, both because growth has stopped and because competitors are about to enter the space, emboldened by Amazon's newly discovered profits. The whole thing is a bit of a sham. Any growth industry (Internet retail) can support only one "no profit rocket," and eventually it comes back to earth when that industry matures and ends the hypergrowth phase.
Which 100% vindicates Eugene Wei's point about tech companies being wary of capital markets.
On the other side of the gorge of eternal peril: cash is king, and should not be underestimated. Those with the war-chests may try to puke all over Bezos' cake by mistaking a lack of current reserves for an actual weakness. I'm sure Bezos is fully aware of the ridge-line he's walking. He probably has aces up both sleeves to clobber anyone who tries to make a move.
Long term, I'd say walmart continues to cash in on the greater unwashed that don't know any better for b&m impulse buys while amzn goes after suppliers and logistics, maybe even an Ali Baba and/or Kickstarter to bring in more product pipes.
"Giant, heavy electronics items that Amazon sometimes ships for free when the shipping cost is clearly non-trivial and cost more than the usual thin margins on such goods are another."
"But if you sell a glass of lemonade for $2 and it only costs you $1 to make it, and you decide business is so great you're going to build a lemonade stand on every street corner in the world so you can eventually afford to move humanity into outer space or buy a newspaper in your spare time, and that requires you to invest all your profits in buying up some lemon fields and timber to set up lemonade franchises on every street corner, that sounds like a many things to me, but it doesn't sound like a charitable organization."
"The vast vast majority of products Amazon sells it makes a profit on."
It should be relatively easy to rephrase most of the language. For example, the last sentence should be worded: "Amazon makes a profit on the vast, vast majority of products it sells."
I think it would be worth it. I can't understand a lot of the post without effort.
Someday, Amazon will need to face the brutal reality of profit.
(I feel guilty mentioning it, but this reminds me somewhat of the 1000 Year Reich sort of thing.)
However at some point it's important to be able to say that they have played out the majority of their growth ambitions and are ready to start optimizing the business for greater profit.
The trouble is that, for many CEOs with big egos, human nature and the structure of corporations push them to want to grow forever. This is a dangerous attitude. Perhaps Microsoft shareholders would have been much better off if the company had been run without ANY ambition to compete with Google or Apple, or to dominate mobile, tablets, or search. If Microsoft were instead to just focus on Windows and Office, extract as much profit from the business as possible, and return those profits to shareholders, then the shareholders would be free to invest in Apple and Google stock.
The trouble with this is that to an ambitious CEO it might feel like giving up. I don't believe it's giving up; it's called focus. Focusing on what you are really good at (in this case Windows and Office), rather than pretending that you are great at everything.
For example, Apple's massive overseas cash pile that they don't want to repatriate and pay out to the owners of the company.
Different medium and market, but basically the same overall strategy.
On the other hand, Jeff is likely more interested in just growing the business than counting profit dollars.
Just raise the gas tax to make up for lost revenue. Sick of hearing "we are addicted to oil" every state of the union and the answer is right there. Solves the road revenue problem as well.
The rate listed in the article (18 cents/gallon) is not a big deterrent to driving - it's a tenth of the gas excise tax I pay in the EU, which comes out to a bit less than $2 per US gallon. Business doesn't stop because of it, and there's extra motivation to reduce polluting transportation.
It does seem like raising the gas tax would be an easier option.
But obviously if there's a hard failure, they aren't always going to be able to give you the amount of time you'd want. Generally speaking, you should have accounted for this situation ahead of time in your engineering plans. Amazon EC2 doesn't have anything like vmotion, it's just a bunch of KVM virts.
If you're using the GUI, the first time you try a shutdown, it will do a normal request, but then if you go back and try it again while the first request is still pending, you should see the option for doing a hard restart. Try that and give it some time. Sometimes it takes an hour or two to get through. Otherwise, Amazon's tech support can help you.
I think Amazon needs to put a lot more effort into educating people about the best practices involved here - creating immutable and disposable servers, making it easier (console access) to create availability groups, etc.
Anyone who's surprised that this happens has not used EC2 very much. It is this way by design.
Then it kept running, but there was no way to reboot it from the EC2 console or over ssh, so that was a bit of a problem; I had to get support to do it.
Moral - reboot it yourself at a convenient time.
For example, I have a client who has some algorithms and data that are potentially quite valuable. EC2 and other AWS services would be a huge help with their project, but is there any way measures could be taken to ensure that no one - even Amazon employees - can get to their code and data?
Edit: devicenull makes some good points - I guess I had the CIA's $600 million AWS contract in my head when asking my question.
Notification that your system is on old hardware that has been deprecated is part of the price of doing business in this cloud system.
As others have noted: yes, it is a little tense (is this my production database or my Continuous Integration machine?) -- the email you get just gives you an AWS id, so you must look it up.
But AWS has enough components to help you build resilient systems that, if you've done your job correctly, you shouldn't care about these messages beyond the labor of spinning up a replacement.
I know you are trying to help but you need to realize that the whole section, and this part in particular, is incredibly condescending and guaranteed to piss off any female who might be reading you.
You want to help achieve gender equality in the technical field?
Pretend that the gender of your reader is of no consequence and just write your stuff, period.
My kingdom for a decent comparison between NodeJS+ClojureScript vs vanilla Clojure (w/Compojure maybe?) for high performance web applications!
I loved your Jepsen series and you communicate on a level that I can relate to. As such I was thrilled to find your guide at the top of HN.
Keep up the awesome work!
The liberal justices voted as a bloc in CLAPPER, DIRECTOR OF NATIONAL INTELLIGENCE, ET AL. v. AMNESTY INTERNATIONAL USA ET AL. to allow a challenge to the constitutionality of warrantless wiretaps, and I expect much the same from Kagan, Ginsburg, Breyer, and Sotomayor in this case.
Of the conservative justices, Roberts would be the most likely to flip with the liberals, especially given his tendency to chase home-run majority rulings for his legacy as a "by-the-rules" arbitrator and his pronouncement that privacy is a paramount constitutional issue. That said, his earlier defense of and work on behalf of Bork, and his theory that the Constitution contains no general right to privacy, leave a bad taste.
Justice Kennedy unfortunately cannot be counted on when it comes to privacy issues. His majority opinion in Skinner v. Railway Labor Executives held that the government could override the privacy rights of railway workers by subjecting them to drug tests under a "special needs" exemption, whereby the Fourth Amendment can be set aside in the overriding interest of public safety; that reasoning is the basis of the NSA's metadata collection program---see: http://www.nationaljournal.com/nationalsecurity/how-justice-...
He's still the second most likely to flip, because Scalia, Alito, and Thomas are basically lost causes. Scalia has basically called a general right to privacy in the Constitution rubbish, and it's unlikely any of the three will bend from their conviction that the "national security agencies" know best.
The votes might be there. It probably hinges on Roberts. But significant positive changes to how the American government deals with privacy issues could happen. Again, the votes might be there, which is better than never discussing the issue at all (or discussing them in dark, dank courtrooms nobody hears about).
Cause for hope goes up exponentially if one of the conservative justices retires and is replaced by a young liberal justice attuned to technology, much as Kagan is. If that happens, this plausible scenario becomes a likely one.
Wildcard: The Supreme Court actually doesn't know anything or very much at all about technology. They still pass paper briefs among each other instead of email...a strongly written amicus brief in this situation by technology-savvy leaders could well tip the balance.
This has the potential to get the whole program killed. Did someone in charge with both the clout and the morality to do the right thing take a risk? Some other reason? This fascinates me.
I don't want to distract anyone from the conversation but I don't understand the case. The prosecutors are just accusing him of planning to join militants? No actual firm conspiracy/plans to actually cause any physical harm? No actual target to attack?
What crimes are the prosecutors trying to prove here?
Also, based on their past decisions how would the SC rule?
Then it starts adding features. Then it starts getting big. Then somebody starts offering enough ad money that maybe the idea of a tiny little banner ad isn't such a bad idea after all. Then a few years go by, somebody discovers a rootkit in the installer for the 300mb version, and gets annoyed enough to once again implement the protocol in 22kb, name it "scrunchyTorrent" and release it.
It's quite fascinating.
I can manage the downloads from any internet capable device from basically anywhere.
Adding CouchPotato and a branch of SickBeard to the mix makes it brilliantly easy to download just about anything, automatically, without searching for anything other than the specific title I'm looking for.
Private trackers are the only reason why I am still using torrents as there are some super specialized small communities around that share otherwise incredibly difficult to get material.
Looking briefly at the code, it appears to be c++ (which is fine) -- but also entirely without tests? Or did I miss something?
The problem with a less well-known client is that private trackers may not allow it, which makes it useless for users of those trackers.
The idea of a minimalist cross-platform and open source Bittorrent client is great though, I really wish there was some good alternative to replace uTorrent.
It's not clear from the site how this differs from any of the other GUI-based libtorrent clients such as qBittorrent or Halite. Does anyone know if this has any unique features?
Total Installed Size: 40.79 MiB
Looking at the changelog, the feature creep has already started: sorting, etc. All of that could have been piped to a specialized program.
People never learn.
Just one thing to point out regarding the final example: read_csv will actually fetch a URL if it's given as input, so there is no need to use urllib2 and StringIO. Instead, you can just do:
from_url = pd.read_csv('http://www.example.com/data.tsv', sep='\t')
One thing I do have an issue with in pandas is the type conversion on sparse data, i.e. a column with missing values. It's a pity you can only represent that as a float, for example.
One thing I would point out for new users is the .loc and .iloc indexers, which I think make selecting data more intuitive because they are a bit more explicit.
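To make the distinction concrete, here is a minimal sketch using a hypothetical frame: .loc selects by label, .iloc by integer position.

```python
import pandas as pd

# Hypothetical frame just to illustrate the two indexers.
df = pd.DataFrame({'price': [10, 20, 30]}, index=['a', 'b', 'c'])

by_label = df.loc['b', 'price']   # .loc: row labeled 'b', column labeled 'price'
by_position = df.iloc[1, 0]       # .iloc: second row, first column

assert by_label == by_position == 20
```

The explicitness pays off when the index happens to be integer-valued, where plain `[]` indexing can be ambiguous.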
git checkout master
git pull
git checkout -b my-working-branch
git add <files>
git commit -m "some description"
git add <files>
git commit -m "review comments or other changes"
git checkout master
git checkout -b my-working-branch-squashed
git merge --squash my-working-branch
git commit
git checkout master
git pull
git cherry-pick <hash from squashed commit>
git push
So I am fine with this weakness. It doesn't impact their core product. This feature is still vaguely useful for less technically literate people, but maybe needs some kind of disclaimer.
Unfortunately, passwords are retrievable out of the LastPass vault in exactly the same way as in the article. It is trivial to simply inspect the DOM and pull them out with some basic JS. This is unacceptable IMO and must be fixed; LastPass is barely functional if you don't keep it logged in. But if you do, all it takes is a right click and a few keystrokes to reveal each password.
I feel a lot worse about this product, now.
Think about how easy it will be for a company to prosecute the "hacker" who was able to circumvent the security of highly-reputed LastPass to do whatever minor thing they did. LastPass uses strong cryptography and blah di blah blah, after all, so this must be a hard-core hacker who needs to be made an example of.
I understand why the feature is useful -- it's a sort-of "honesty lock" that's easy to get off, but it's obvious to the user that they're not supposed to take it off -- but LastPass should change the language around it so that non-technical users understand that regular people, non-experts, can bypass it.
Either way, sharing the password assumes that you are giving them the ability to login to your account. If the person you share with wants to give the password to someone else, it doesn't matter if they can see it or not. They can just share the password to their LastPass account. In other words the fact that they can see the password doesn't change anything from a security standpoint.
I suppose the one exception is a situation where you wanted to use the same password for your email and your bank and only wanted them to share access to your email but not let them see the password so they couldn't log in to your bank. This has a lot of security problems even if you aren't using LastPass or sharing your passwords. LastPass does warn you not to use the same password on multiple accounts unless you explicitly turn the warning off.
Whether through Burp or through DOM inspection, there is no real way to share an account without the recipient being able to read the password.
The feature of sharing an account is, by definition, insecure.
The best solution, if you must share the account, is to use LastPass and change the password after they use the account and let LastPass remember the new password.
I know one person that used to use LastPass. He was a coworker that would utilize it on a shared terminal server and he would select the option to remain logged in. I logged in as me on the terminal server, copied his Chrome cookies file to another account and was immediately able to log in as him to LastPass and access every single one of his passwords. He deleted his LastPass account that day.
There are plenty of ways to address this and other inherent security issues with it but I don't see evidence that the majority of non-technical LastPass users are utilizing any of them.
Here's a discussion that I found about the issue I discussed on LastPass' forum:
First party viewers mostly seemed to like it:
CNET gave a second party writeup:
Then third party people started mischaracterizing it:
Finally, the Hill wrote a fourth party account, quoting these third party accounts, and that's what Washington DC saw:
Not everyone got it wrong; I think this account is closer:
But I encourage you to open up those tabs and go through them one by one to see a kind of pinball reflection of the tone of the talk. In microcosm it's an example of the emerging gap between Silicon Valley and DC, and gives a sense of how policy makers can inadvertently form their opinions from echoes of echoes. Doubly ironic and somewhat sad as we can use the internet to make direct connections between people these days. The good thing is that interested parties can see the primary source directly.
Sadly, consumer internet upload speeds haven't kept up with video quality. And these are only 720p, down-sampled from the 1080p source material.
Why the use of the word "she"? I see a lot of articles where the undefined person is a she. I speak French, and "person" ("la personne") is a feminine noun, so you can say "she" about a person, but why in English? Especially in male-dominated jobs like programming. A programmer is more likely to be a he than a she, so why try this hard to be politically correct?
Brilliant! This whole section on choosing a language is great.
"One tends to think of a large system that has components in three or four languages as a messy hodgepodge; but I argue that such a system is in many cases stronger than a one-language system..."
This part sounds insane until you start working with eventually consistent messaging like: http://www.reactivemanifesto.org
Engineers have the power to create and sustain.
Thanks a lot for this!
For everyone else, go ahead and try to read things titled "How to Be a Programmer" but don't expect it to actually help you, you know, BECOME one.
No, seriously: renting a 2 bedroom flat in a not brilliant suburb of London costs around £25,000 a year, or US $40,000. Then you can add council tax (another £2,000), water, electricity, and gas bills, and travel. Upshot: the fixed costs of living in London are on the order of US $50,000 per year (two beds) or around $40,000 per year (one bed). Note that I focus on the two bed option because that's the practical minimum for a family unit, or for someone who telecommutes from home. Note also that the average gross income in London is a little under £28,000 per year (before tax).
Upshot: normal people and normal families can't afford to rent in London any more. The only thing propping up these insane prices is the scarcity induced by the current bubble in the foreign investment housing market. The crash, when it comes, is going to be epic.
It's a hard lifestyle - by about the 10th of these flights you will be sick of the security hassles (and RyanAir) - but it was way better than living in London full time (no offense).
I did it for 18 months before finally burning out on it and moving to a full time remote position (which paid less but I decided that that was worth the upgrade in lifestyle).
I rented a loft for €6,620/year. It's in mint condition and it's on the outskirts of Barcelona. I'm 12 minutes by subway from Plaça Catalunya, 7 minutes by train, or 20 minutes by bus. To be honest, I would never again rent in the centre of the city. It's expensive and all the buildings are old, without proper amenities.
If you want to come to Barcelona, check the outskirts, and get a scooter or enjoy the Barcelona transportation system. It's wonderful.
I want to add some more info about living in Barcelona.
The weather is magnificent. It barely rains all year. You can get to the mountains within a 2-hour drive if you want to enjoy the snow in winter.
Eating can be really cheap IF you go to the supermarket, buy all the food and cook yourself like I do; I saved €300/month doing this instead of eating out. If you can compile Rails, you can be a chef :). I buy staples for around €90/month, which includes the 40 liters of water I buy. Then each week I buy meat, fish or vegetables, and it costs me no more than €220 a month.
I pay €90 for electricity, €30 for gas and €40 for water every 2 months, plus €60 for a 100 Mbit fiber connection + phone and mobile, and that's all.
Remember that 1 bed flats are especially in demand at present as a result of the (in)famous bedroom tax. A single person or a couple are only allowed 1 bedroom if they need to claim housing benefit (unemployed or low-wage, and remember that in London 'low waged' is a pretty high threshold, e.g. teachers, social workers, retail staff, bus drivers &c).
Bear in mind that building 1 bedroom flats has (hitherto) been regarded as a waste of money by housing associations and councils, so really only commercial lets are available (usually at twice or three times the rent of an HA/council flat with 2 beds). Ironically, the taxpayer will thus pay more to move couples out of 2 bed high rise flats in rough areas which are hard to let, and into expensive private let 1 bed flats. There will be no takers for the high rise flats (unsuitable for children), so they will be mothballed and then expensively demolished.
Yes, bonkers, but the UK is run by the Daily Fail and other populist idiots.
Edit: OK anonymous downvoter, state your reasons
So you spend 128 hours per month commuting to save £387. Not what I would call a bargain.
Oh, what about double taxation? I'm pretty sure you'd be hit by that and that would most likely put you well in the negative.
There is, for instance, a flat steps from the London Overground in South Norwood (http://www.zoopla.co.uk/to-rent/details/30891717) that is going for £400/month (€470/month).
(It's unclear why you'd want such a long commute, versus living in far-away green suburbs by the train.)
Similarly, it would be cheaper for me to rent in Mexico City and commute to my job in Los Angeles. Yes, some global metropolises have lower rents than others.
Who cares if you live in Barcelona if you're asleep all the time?
There are two arguments that are typically given.
Firstly, you want to encourage people of different incomes to live together. I don't believe that this is a worthy goal. It's not clear that the benefit to people on low incomes outweighs the loss to their high income neighbors. And the richest 1% always find ways to isolate themselves anyway.
The second argument is that welfare should take into account the cost of living. I believe this too, within reason, but the welfare system already does this in many ways. In fact, London's "one bedroom rule" is a clunky way to do precisely this: it lets people live where they like, but prevents them from purchasing an excessive "quantity" of housing.
Sure, Birmingham is not Barcelona or London, but I'm not sure how you'd enjoy them by living most of your off-work time in a Ryanair flight.
I decided that I was willing to pay a premium for my <25 minute commute to work (close to Tottenham Court Road). And as long as other people think the same, rent will go up. Pretty standard supply and demand. Everyone works in the center, and nobody likes to waste 2 hours of their day hopping trains and buses.
So this is what you get, take it or leave it, I guess...
Also, from this article it seems that rents have actually fallen, because in 2007 I could not find that kind of accommodation for that price, and I was struggling to save any money compared to now.
I guess the housing market collapse in Spain has actually impacted the crazy Barcelona prices of mid 2007.
€30 return flights, and it's faster than Barcelona: 1hr flights (and you can show up 45 mins before the flight leaves for IE->UK).
Even using Hotel Tonight while I was in London, accommodation was £200+ on a Tuesday night.
The main cost that was omitted, and that would tell us whether the commute is worth it, is the opportunity cost (http://en.wikipedia.org/wiki/Opportunity_cost). While it'd be difficult to estimate how much the author's time is worth, if we assume that he/she earns an hourly wage of W, and it takes H hours to commute to and from London, then the opportunity cost would be something like W x H. If that opportunity cost is greater than the £387 in savings, then it would not be cheaper to commute, from an economist's perspective.
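Using the approximate figures quoted elsewhere in the thread (roughly 128 hours of commuting per month to save £387), the break-even wage is tiny; a back-of-the-envelope sketch:

```python
# Back-of-the-envelope break-even wage for the commute, using the
# approximate figures quoted in the thread.
savings = 387.0        # monthly rent savings, GBP
commute_hours = 128.0  # monthly hours spent commuting

# Commuting only "pays" if your time is worth less than this per hour.
breakeven_wage = savings / commute_hours  # GBP/hour
print(round(breakeven_wage, 2))           # 3.02
```

At roughly £3/hour, almost any professional wage makes the commute a net loss once time is priced in.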
Sure, there's no council tax and utilities are cheaper. Still, it's pretty close.
I bet Paris ain't so far from that either, and let's not talk about NYC.
Basically, prices in all of the large tech cities are "batshit insane".
The only hope I see, barring the 1h30-2h plane travel the author suggests, is remote work whenever possible. You can then live 3-4 hours away from big cities (so you can still get together if needed), and prices are slashed by 10.
Why would you live in Central London? You can commute for an hour into Liverpool St and get much cheaper rents.
- from quality angle?
- from price angle?
Given the hassle of the commute, people would prefer to just live in London. Which is the whole point: real estate pricing is efficient in this case.