Still, I like the idea. This is something that should be covered in a CS 102-type course. I know way too many CS guys who have no idea how to debug, let alone how their code is being implemented.
Is this ever a real issue, even on any embedded system from the last 20 years?
Moreover, the difference between 1.05 and 1.06 nm drastically affected the freezing point. I don't think these can be built, en masse, to such precision.
Even the name is cool: "water wires".
So beyond fiber optics we could have water wires? This should help photonics researchers make a (faster) computer with light, right?
My favorite "invention" of that form is the water heater "booster" ball. Basically you take a kilogram of spent fuel rod, encase it in an austenitic stainless steel ball, and suspend that bad boy in the center of your water heater holding tank. Hot water for the next 500 years without using gas or electricity :-).
// A team of physicists and chemists from the University of Bristol have grown a man-made diamond that, when placed in a radioactive field, is able to generate a small electrical current.//
When I read "small electrical current", the physicist in me naturally assumed a very small current on the order of pico- to nanoamperes (and that's without being conservative); essentially useless. Metrics like the half-life of the battery don't make any sense at all if the power or current rating is not specified. A current rating is something you could trust to judge whether it will end up as a viable product.
This would appeal more if they can give a direct link to their research paper.
"There are so many possible uses that we're asking the public to come up with suggestions of how they would utilise this technology by using #diamondbattery."
It's cool how they're engaging the public in their research in this way. Of course it's a transparent ploy to get social media mentions, but scientists of all sorts (and I kind of am one myself) would do themselves a favor by doing more to get the general public to know about their/our work. This particular thing 'feels' sort of slimy (to me at least, but I suspect to others on this site as well), but I think that's a bad reflex on my part, and that easy, low-friction things like giving people an opening to send a quick tweet or FB post reaches a different audience than having a stand at the 'open science fair' or having a lecture open to the general public; those tend to self-select in the audience they attract, to put it mildly.
While diamonds are extremely hard, they are brittle, and shatter relatively easily.
Could be used in underground seismic monitors https://twitter.com/Keminoes/status/803297734694465541
Car keys that require batteries. If we don't have a spare battery with us and our keys die, we're stranded. MP3 players, too https://twitter.com/tysongeisler/status/803438384052129796
To power a low-power-mode Arduino for gas detection (CO, CO2, Low O2, etc), fire detection, radiation detection, etc. https://twitter.com/noalear/status/803389146433626112
Could diamond batteries be used to power medical nanobots? https://twitter.com/weirbe/status/803418222972125184
Power clothes that contain sensors, as well as clothes that electrically regulate the temperature. https://twitter.com/CIMCloudOne/status/803440800759574528
how about in cell phones to give a little charge to the battery while the phone is not in use. https://twitter.com/W_Haas/status/803352403512872961
Is there the potential to power watches as a fair amount of waste is generated from depleted watch batteries each year. https://twitter.com/Merrett72/status/803297788163289088
Just think in terms of domestic terrorism and dirty bombs. Exploding a few such batteries would release radioactive powder, which is quite dangerous if inhaled, and cleaning it up is very difficult.
But I'll say this: the man/woman who creates a company that builds a revolutionary new battery that keeps your laptop humming for a week will be very rich. Probably the next richest person on earth.
So, they are replacing the thermocouples as used in RTGs with this diamond-like stuff? And low radioactivity sources (as opposed to plutonium)? That would indeed be revolutionary if it worked.
Using numbers from patch_collector: 1 × 50 W lightbulb / 0.0013 W/gram = 38 kg = 83 lbs of diamond per bulb.
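For anyone who wants to redo that arithmetic, here it is as a one-shot Python sketch. The 0.0013 W/g specific-power figure is taken on faith from patch_collector's comment:

```python
# Back-of-the-envelope check of the diamond-per-lightbulb figure.
# Assumes the quoted specific power of 0.0013 W per gram of diamond.
POWER_PER_GRAM_W = 0.0013
BULB_W = 50.0
LBS_PER_KG = 2.20462

grams = BULB_W / POWER_PER_GRAM_W      # grams of diamond needed
kilograms = grams / 1000.0
pounds = kilograms * LBS_PER_KG

print(f"{kilograms:.1f} kg ({pounds:.0f} lb) of diamond per 50 W bulb")
```

Exact division gives about 38.5 kg (~85 lb); the 38 kg / 83 lb above is the same calculation with intermediate rounding.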
Diamonds are far from indestructible. They shatter. More importantly, they burn. We don't see it very often, but put enough of them together, add heat and electricity, and you better hope there isn't any oxygen nearby. Imagine a couple pounds of these things, on fire, pumping out radioactive carbon dioxide. At least when uranium burns it produces something heavy that can be filtered. Filtering radioactive CO2 would be a nightmare. These things will never find their way into any consumer product.
1) Have an ionizing radiation source.
2) Get a robust dielectric, with metal plates on both sides of it. Call the plate near the radiation source the cathode and the far one the anode.
3) As the ionizing radiation impacts one side of the cell, the released electron travels only a short distance through the cell. Ideally, the first interaction is with a cathode and the far side an anode, imparting a negative potential to it.
4) A potential difference now exists between the far-side anode and the near-side cathode, from which you can draw current.
Strictly speaking, this is more like a capacitor that is trickle-charged by the radiation field than a battery. There is a design tradeoff in the thickness of the metal and dielectric layers, in that the initial and final ionizing interactions must follow a (cathode/dielectric -> anode) or (cathode -> dielectric/anode) pattern to get any energy out of it. Ideally, the dielectric has an extremely small cross-section for interaction with whatever the ionizing radiation source is, while the metal plates interact very strongly. However, there will always be some losses due to internal ionization of the dielectric itself. I'm not sure how you would build an effective multi-layer structure, either.
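The trickle-charged-capacitor picture can be sketched numerically. Every number below (activity, collection efficiency, cell capacitance, breakdown voltage) is a made-up illustrative assumption, not a measured value for any real device:

```python
# Toy model of the cell as a radiation-trickle-charged capacitor.
# All parameter values are hypothetical, purely for illustration.
E_CHARGE = 1.602e-19      # C, charge collected per captured electron
activity_bq = 1e9         # decays per second (hypothetical source)
collection_eff = 0.01     # fraction of decays landing charge on the anode
capacitance_f = 1e-9      # F, cell capacitance (hypothetical)

# Steady trickle current from captured decay electrons
current_a = activity_bq * collection_eff * E_CHARGE

def voltage(t_seconds, v_breakdown=1e3):
    """Cell voltage after charging for t seconds, clamped at breakdown."""
    return min(current_a * t_seconds / capacitance_f, v_breakdown)

print(f"trickle current: {current_a:.3e} A")
print(f"voltage after one hour: {voltage(3600):.2f} V")
```

Even with a generous 1 GBq source, the trickle current is picoamp-scale, which is consistent with the "essentially useless current" skepticism upthread.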
Is this preferable to an RTG? Almost certainly not. The vast majority of the radiation energy in an ionizing photoelectric-effect cell is still released as heat. Very little energy can be captured this way, so it is quite inefficient in terms of energy produced relative to the radiation emitted. You can extract some more electric work by pre-charging the cell such that the freed electron is slowed in part by the electric field gradient, but in practice the bulk of the energy is still lost as heat.
Sea story time! The internal electric field within the dielectric will produce some weird mechanical stresses, too. When my ship's reactor plant went through a refueling overhaul one of the things they replaced was the reactor compartment windows. There are a couple of leaded glass windows that allow Mk. 1 eyeball inspection of the reactor compartment during operation. They are quite thick, electrically insulating, and they build up a large internal electric field along with internal stresses. Standard maintenance was to replace them well before the stress could fracture the glass.
So... could the press release be ...excessively breathless? My guess is a qualified "probably". The physics behind the photoelectric effect are well known and understood. Diamond is special in part because it has a fantastically high dielectric breakdown strength. So, you can extract more work out of the ionized electrons by supporting a very high potential gradient between the anode and cathode, which could be seen as a breakthrough for this type of energy conversion device. BUT it also would enable a new generation of high-density ultra-capacitors. The energy density in a capacitor is limited (in part) by the breakdown strength of the dielectric, since the stored energy is a volume integral of the squared electric field strength. Ultracaps are technologically much easier to commercialize, and much more investor-friendly, than anything involving an ionizing radiation source; since that isn't what the press release points to, I think it is likely to be ... exaggerated.
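To put numbers on the energy-density point: the stored energy per unit volume is u = ½ε₀εᵣE², so it grows with the square of the field the dielectric can support. The breakdown fields and permittivities below are rough literature-style values I'm using as assumptions, not quotes from the article:

```python
# Why breakdown strength matters for capacitor energy density:
# u = 1/2 * eps0 * eps_r * E^2
EPS0 = 8.854e-12          # F/m, vacuum permittivity

def energy_density_j_per_m3(eps_r, field_v_per_m):
    """Volumetric energy density of a dielectric at a given field."""
    return 0.5 * EPS0 * eps_r * field_v_per_m ** 2

# Assumed ballpark values for illustration:
diamond = energy_density_j_per_m3(eps_r=5.7, field_v_per_m=1e9)   # ~10 MV/cm
polymer = energy_density_j_per_m3(eps_r=2.2, field_v_per_m=6e8)   # film cap

print(f"diamond-limited density: {diamond / 1e6:.1f} MJ/m^3")
print(f"polymer-film density:    {polymer / 1e6:.1f} MJ/m^3")
```

Under these assumptions the diamond-limited figure comes out several times higher than a conventional polymer film, which is the sense in which a robust high-breakdown dielectric could matter more for ultracaps than for nuclear batteries.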
I could also be wrong, and we'll soon have a new generation of ultracaps!
So all countries with nuclear power plants have to return/control the 'nuclear waste'.
It's really interesting/sad to read their second point for shutting down:
> pro users are more likely to want a larger number of integrations with new services and data sources, something thats hard to provide with limited revenue, which left the app close but not quite for many users
Because this is exactly what Zapier, the company I co-founded a year later, provides for free to other companies/products. Integrate once with us and automatically get integrations with hundreds of other apps (750+ and growing).
I love and use several Panic products (Transmit, Prompt, Firewatch) and hope this end-of-life enables them to spend more time on new ideas.
I do this irrational stuff all the time. $15 for a wedge of cheese I haven't tried before? Yep, totally fine. A $2.59 app? Hmm, yeah, not sure, think about it for 5 minutes, read reviews, seems expensive, what if I don't like it, pass...
I know it is ridiculous and I see myself doing it, but it still happens.
The monthly fees could also fund development of more 3rd party integrations.
I found others to look so boring and didn't have time to spend styling it and making new widgets.
There's not a lot of overlap right now between "pro" and "Apple, Inc." Right now I'd reckon Apple sees the puck heading for the laziest leisure class the world has ever known.
> We fell into a trap. We thought marketing meant we had to come at people sideways to get them to give us money. Turns [sic] out, the games just good and people want to play it.
One small suggestion: you're not doing a great job on the up/cross sale. I went to your site and added the game to the cart. But after reading about the expansion I WANTED the expansion. Make it easier for people to add them on. Every large ecommerce store does this. It's helpful to your users, increases RPS, and (when done right) relevant enough to not be annoying. Make sure to find a non-popup solution that won't get blocked by ad blockers.
I'll be buying the product and an expansion based on the Amazon reviews: good luck making some money!
Is that for real? Is everyone else's experience with Facebook advertising similar?
Edit: since the game appears to be kid-friendly and non-partisan, maybe the company should consider a crowdfunding campaign to buy back the extra stock and send them to schools for free.
However, after flying me and my co-founder to China, living there for a month to work out all the kinks in the process, and then the incredible cost of shipping and customs taxes (we had to rush ship for Christmas, of course), I'm thinking the U.S. would have been a better choice (at the small volumes of 1000 per batch we were doing).
Now, if you are making 50,000 or a million of something, then China is definitely the way to go. But for small batches I recommend shopping around U.S. factories... They're pretty desperate for it at this point.
As someone who's spent their professional career working for venture-backed tech companies that hadn't yet made a profit, that sounds damn good. Many many companies never get that far.
I also have an issue with the Big Lesson of "order at most two times what you've already sold". The lesson shouldn't be a rule of thumb. The lesson should be to find methods to gauge interest better, not just "2X maximum".
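"Gauge interest better" can actually be made quantitative. The classic newsvendor result says to order the demand quantile set by the cost of over- vs. under-stocking, rather than a flat "2X" multiplier. A sketch with entirely made-up costs and a made-up demand forecast:

```python
# Newsvendor sketch: order the demand quantile cu/(cu+co), where
# cu = margin missed per stockout unit, co = loss per leftover unit.
# All costs and the demand model are illustrative assumptions.
from statistics import NormalDist

unit_cost, unit_price, salvage = 5.0, 15.0, 1.0
cu = unit_price - unit_cost       # underage cost per unit
co = unit_cost - salvage          # overage cost per unit
critical_ratio = cu / (cu + co)

demand = NormalDist(mu=1000, sigma=300)   # forecast, not a rule of thumb
order_qty = demand.inv_cdf(critical_ratio)

print(f"critical ratio: {critical_ratio:.2f}")
print(f"suggested order quantity: {order_qty:.0f}")
```

The point isn't these particular numbers; it's that the order size falls out of margins and a demand estimate, so improving the demand estimate directly improves the order.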
Anyway, thanks for sharing. I obviously thought it was an interesting read and it got my brain thinking.
You're throwing money away. I searched "presidential debate game" on Amazon and you're the #1 (non-paid) result. Why are you paying for clicks on a search for which you're the only relevant result?
Cheers, mate! I hope you become very rich.
I'm about to order one, and sent a link to about 10 friends. Good job!
If you get over to games expo in the UK I will have to buy you a beer
I don't really get Kickstarter. The above sums up a lot of my concerns about it and why I have yet to look into it in a serious way.
Is this a typical result with facebook ads?
If you had just made the Kickstarter amount we wouldn't have to read this; however, now the publicity of your failure will bring you more sales...
1. Read and get a basic understanding of the material before lecture.
2. Attend every lecture and take very few, if any, notes. Most of what you need is in the book, and math is about understanding. You cannot grok and write at the same time, and it's better to try to grok while an expert is explaining it to you.
3. Do a shitload of problems / proofs, depending on the class. Be honest with yourself when you don't fully understand something and stick with it until you do.
Math is different from other subjects, and you need to treat it that way.
1. attend all the lectures
2. attend all the recitation sessions
3. do 100% of the homework, and on time
4. make sure to understand every homework question and how to solve it.
5. write legible notes
This was good enough to get a baseline B. To get an A, you had to put in much more work.
I know this stuff seems obvious. But it took me a while to figure out I couldn't just coast and wing it like in high school, and many other students had the same experience.
I think it's important to get the "mechanical" work, e.g., step-by-step derivations, down cold, so it can be done quickly and accurately. For me, this also resulted in committing the definitions, axioms, theorems, etc., to memory. During exams, being able to speed through this work gave me the spare time I needed to sit back and think about each problem, especially the one "surprise" problem in the set.
Oddly enough, I kind of treated the humanities courses in a similar way. I signed up for courses that were known to be graded based on mostly written work -- essays, papers, etc. Because all of these courses involved the same "mechanical" work, e.g., writing a paragraph supporting a concept, I also got very quick at it, which compounded itself in terms of getting assignments done quickly and writing fairly lengthy, coherent answers, in blue-book exams.
Note: It also helps a lot to actually enjoy the stuff. Attending the lectures helped me later in life, as I observed the teaching styles -- good and bad -- of my teachers. That experience has helped me with presentations and other kinds of interactions in my regular job.
* Watch videos on Khan Academy/YouTube
* Use Wolfram Alpha to confirm/deny your solutions
* Get a tutor who can explicitly show you how they approach solving any arbitrary problem you bring them
* Accumulate a list of math tricks you can use and how to use them (example: how to long divide polynomials)
* Print out old exams and pretend to take them ~3 days before the final
* Rewrite your lecture notes with each example problem on its own page(s)
* Create an index on your lecture notes so that you can quickly identify what type of problems and topics were covered in each lecture
* Identify your strengths; we can't all be good at everything, and knowing your strengths will give you a foundation of confidence to build upon
* Cue cards for memorizing all those trig identities
* Expect to spend a lot of time; this is not always possible
1. Skip lectures because the professors barely speak English and can't teach
2. Barely rush through the homework before it's due; it doesn't matter if you understand it, as it has no relation to the rest of the class
3. Find last semester's exam and study that for a morning, as the questions are pretty much the same.
It's a shame that the higher-level courses that should be more interesting and useful end up being the most painful and disregarded.
Source: I actually did this and it improved my grade 55% in Calc I.
This list reads like my strategy to do well in a math class without learning the material (literally; this method got me through numerical methods and advanced calculus, both classes that I had no interest in and took only to satisfy the degree requirements).
My thoughts on the particular points:
>Keep a list of THINGS TO MEMORIZE.
If this list is anything but empty you should feel bad. "Memorizing" something in math is a way to act (and test) like you understand it without understanding it. The one exception is in computationally heavy classes, where you memorize the completed solution of common forms. However, if your goal is to understand the material, you should be able to derive everything you are memorizing without thinking. The memorization is only to save you time on the exam by skipping the computation. Having said that, as I mentioned above, if you do not understand something but still want a good grade, memorize it. The professor will never see the difference.
>Watch for example problems.
Because these will be the problems on the test.
>Know how to do every homework problem assigned!
Yes. Also, know how the book/teacher wants you to do the problem. It is often times hard to write a problem that can only be solved one way, so it is sometimes possible to avoid using a method you don't like or understand.
> Start the homework at least a few days before it is due.
Yes. Also, if possible, consider homework due at the last office hours before it is actually due. Otherwise, you can't go to office hours for help.
>Keep a running list of HARD PROBLEMS.
Good advice if you are going the memorization route; otherwise, this is just memorizing the solution to a particular form of question. Also, in my experience, if you are going the understanding route, this list just does not stay relevant long enough to justify keeping it.
>When you get your homework back: Look over the things you got wrong.
Always good advice. Having said that, if you got a problem wrong (for reasons other than computational/algebraic mistakes), the bigger issue is that you thought you got it right. This means that you not only did not know how to solve the problem, but that you have misunderstood some concept that you need to learn.
>Find a quiet place, set a timer for the amount of time you'll have in the exam, and take the practice test. Don't look at the practice test before you do this.
Good test prep advice in general.
>If a problem is hard, skip it and come back later.
I triage problems much more aggressively. If a problem looks time consuming I skip it. If a problem looks like it involves thinking, I do enough work to verify that it actually involves thinking then skip it.
>Do a quick check of each problem to be sure your solution is reasonable. E.g. if the problem asks for a distance, is your solution positive?
Do this check after you have finished the test. If you got something wrong but didn't finish the test, then knowing you got it wrong doesn't help; you still didn't have a chance to correct it. Also, you are more likely to notice an incorrect answer after spending time away from the problem.
Having said that, sometimes your answer "feels" wrong as you are solving it. If this happens and you see where you went wrong, correct it. Otherwise, complete the incorrect solution (if feasible), and mark it. This gives you the chance for partial credit; and sometimes your feeling is just wrong, and the answer is weird.
>Write SOMETHING on every problem. The grader really wants to be able to give you some partial credit.
If you have time. If you really have no idea how to approach a problem, then your time would be better spent doing better on the rest of the exam instead of producing a plausible looking solution. Mark these problems, and, if you have time at the end, come back and look at them again.
Having said that, this is still good advice. If you think you know how to solve part of the problem; do it. If the part you know how to do requires you having computed something that you do not know how to compute, then clearly write "let a = thing I can't compute". Graders don't have time to look closely at your answer; make it easy for them to give you partial credit.
If you are answering a proof based question, and cannot figure out how to prove a particular fact that you need for your proof, consider writing "it is clear that". You will be amazed how often this works.
>When you've tried everything, go back to the problems worth the most points first.
Triage. Go back to the problems that you think you got wrong and can improve first.
>Given time, double check your algebra carefully!
If possible, verify your answers using a different method. For example, if you are asked to find an integral, verify your answer by taking its derivative. You are less likely to make the same mistake, and for many problems it is easier to verify an answer is correct than to find the answer.
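That differentiate-to-check habit can even be run numerically. A self-contained sketch (the integrand x·cos x and its antiderivative here are my own example, not one from the thread):

```python
# Checking an integral by differentiating the candidate antiderivative:
# if F is the claimed integral of f, then F'(x) should reproduce f(x).
import math

def f(x):                      # integrand
    return x * math.cos(x)

def F(x):                      # candidate antiderivative of x*cos(x)
    return x * math.sin(x) + math.cos(x)

def numeric_derivative(g, x, h=1e-6):
    """Central-difference estimate of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

# F'(x) matches f(x) at several sample points, so F is plausible.
for x in (0.3, 1.0, 2.5):
    assert abs(numeric_derivative(F, x) - f(x)) < 1e-6
print("antiderivative checks out")
```

On an exam you'd do the derivative symbolically, of course; the numeric version is just the same idea automated.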
>After the Exam
Write down what you can remember about the problems you could not solve (including ones where you put down something that might be correct, but were not sure about). Solve these problems (using book/notes/TAs/etc if needed).
When you get the exam back, compare it to the list of problems you knew you didn't get. You don't care about these problems at this point; you already knew you missed them and worked through them. You learn nothing new by the grader telling you that you missed these.
The problems that you did not know you got wrong are where you should focus your attention. If you just forgot about them, then work through them like you did the ones you already knew about. If it was a computational/algebraic error, don't worry about it (but do do more practice if these errors cost you a significant amount of points). Pay special attention to problems that you thought you got right but didn't. These highlight the areas where you have a misunderstanding of the material.
HARD PROBLEMS list:
As you might have noticed, I don't like this. What I do like is a concepts list. When you go to study, read through the list and make sure you understand all the concepts. It should be small enough that there is no point in creating a separate list for hard concepts; and what you consider to be an easy/hard concept will change as you get more practice with some things, and never see other things between the first month of class and the final.
Studying for the test:
As you probably noticed, my opinion is that many of these points are techniques to study for the test. This is a valid thing to do in school (after all, you are graded on the test, not understanding), but be aware when this is what you are doing. If you plan on taking another class that builds on this one, then this method will come back to bite you then.
In class notes:
Do not make them. The only reason you should be writing during a math lecture is to use the paper as a scratchpad to think about what is being said. Anything else is a distraction from the lesson. All the material should be in the book. If it isn't, you can ask the professor for a copy of his notes. If you want your own notes (which can be a good idea), write them after class. If you do take notes, keep them in a separate notebook. Interspersing them with problems and scratchpads just makes it more difficult to use them.
If you are taking math as a gen-ed requirement, ignore this. If you are going into a math-heavy field, learn LaTeX as early as possible. Write your notes in LaTeX. Do your proof-based homework in LaTeX. This will pay off down the road, when you A) know LaTeX and B) have a digital record of your previous math classes. Plus, turning in proofs written in LaTeX makes them feel far more correct, so you might get graded easier.
Form a study group of people with similar skills as you. If you try studying with people far better than you, then you will end up just having them give you the answer; in which case you are better off talking to a TA (who is probably better at explaining things). In a study group, you want to be part of finding the answer. Along the same note, if part of your group just gets the answer (and you are in the part that does not), start by talking with the other people who do not know the answer. That way you can figure it out together, instead of just being told what the answer is.
I haven't used Windows for years now, so the details are a bit fuzzy, but it essentially worked like this:
Start the machine. During boot (when you see the orb splash screen), turn off the power or hold down the power button for a few seconds.
The next time you boot up the machine, windows will say it failed to boot and offer to go into startup repair. Do that, wait for some time, and click through until eventually you see a bug report that you can open up in notepad.
Once you are in notepad, open up the "open file" dialog. From there, navigate to "C:\Windows\System32" and replace "sethc.exe" with "cmd.exe". Now, reboot normally.
Once you reach the login screen, spam left shift until you get a command prompt with admin privileges. Now, you can create new users, change the password and privileges of existing users, or even start up explorer.exe and use the computer normally as admin, bypassing the login screen entirely.
This works because "sethc.exe" is the executable responsible for Sticky Keys, which is activated by pressing Shift repeatedly. With the swap in place, cmd.exe runs instead of sethc.exe.
On the other hand, if MS pushes the update to the PC and it self-launches or can be initiated by a non-administrator, then it seems like there is a real security problem here.
The number of Macs I've unlocked by creating a new admin by removing the "install is finished" file in single user mode is in the teens.
In my case it was either that the language pack was wrong: Eng UK not Eng US, neither of which actually had a language pack installed... or it was the Windows toolbar/menubar being docked to the left of the screen and not the bottom. One of these stopped the upgrade completely, repeatedly. The greatest security risk had to be getting stuck on an old version of Windows with no good info on how to fix a 2-year-old bug in the upgrade process.
> Combined with other significant security advances, such as Credential Guard, Windows Hello and others, we've made Windows 10 Anniversary Update the most secure Windows ever.
There must be an option to stop full automation of upgrade process or MS can just recommend disconnecting from network while upgrade is taking place.
MS does it for convenience, I assume, so people aren't prompted while the upgrade is taking place. This is my presumption; I may be wrong.
Good advice in general for almost any software.
And to those who think I am derailing... http://news.softpedia.com/news/microsoft-wants-all-linux-dev...
"Individuals dont in fact enjoy being evaluated all the time, especially when the results are not always stellar: for most people, one piece of negative feedback outweighs five pieces of positive feedback. To the extent that measurement raises income inequality, perhaps it makes relations among the workers tenser and less friendly. Life under a meritocracy can be a little tough, unfriendly, and discouraging, especially for those whose morale is easily damaged. Privacy in this world will be harder to come by, and perhaps second chances will be more difficult to find, given the permanence of electronic data. We may end up favoring goody two-shoes personality types who were on the straight and narrow from their earliest years and disfavor those who rebelled at young ages, even if those people might end up being more creative later on."
Pervasive employee monitoring and feedback isn't costless. Some people will improve, others will get fired/quit find a new job, but there will be some who cannot take it at all. If losing a job wasn't so punishing economically and status-wise, it would take a lot of, but certainly not all, of the sting away.
- Equity vesting schedule is 5%, 15%, 40%, 40% over 4 years
- Relocation package is prorated for TWO years. If you leave after staying for a full year, you still need to return 50% of it.
- 401K matching only vests after working for 3 years. If you leave within 3 years, no matching for you whatsoever.
- No tuition reimbursement. Want to get a part-time masters in CS? Pay it yourself!
- No catered food. No free soda. No free snacks. If you are hungry, you can eat at one of the shltty cafes.
- Obnoxious on-call routines. You are woken up at 3:30 am to wait for the event to be over. Why not automate things? Because replacing people is cheaper than building great software!
This is Amazon's mindset, top down. The root of the problem is that the leadership does NOT care about employees or technology. This is a retailer, a powdered Walmart; what do you expect?!
SDE 1 and SDE 2 are simply the slaves working at a sweatshop. Some of my co-workers were hired without onsite interviews. They do some video chat and they are hired at Amazon. They don't even know how to write bash scripts. Our team used to have a technical program manager who couldn't even write a Python script. For simple things like running a command-line tool, he cuts a ticket and lets the engineers do it.
The managers at Amazon pocket bonuses and don't give a damn. They don't carry pagers and when they do, they just page lower level employees. The only reason people take offers at Amazon is that they can't get better packages from Facebook/Google.
* I worked at AWS for 2 years.
Something needs to be done to help people financially who are looking for a way out from the abuse.
However my nephew didn't have such a fun time. He was working for one of their warehouses in Kentucky and they were ruthless to the workers like him. They had a snow storm, he got stuck in the snow and instead of being understanding they reprimanded him for it. He liked the pay but couldn't take the humiliating treatment, so he quit.
When thinking about an employer, above a certain size threshold, never judge a company. Always judge a department. You don't work for a company. You work for a department. Above a certain (fairly small) size, the only thing you'll share with the employees in the other departments will be the domain name in your email. Everything else will be coincidental.
Having escaped from an abusive manager myself, I can imagine what this person went through. Managers who are skilled in the art are able to inflict pain without leaving much of a paper trail.
I did ask for (and got) professional help, including medication. There's only so much stress 24/7 that you are able to handle before you start to crack. Who knows what would have happened if I just tried to ride it out.
I'd have gone bananas if I had been placed in a PIP instead. This was one of the possibilities identified by my branch predictor, so I was collecting a mountain of evidence against said manager. Thankfully, it wasn't needed.
(I realize that nowhere in the article it says a manager was the issue, but corporate pattern-matching gets pretty good after a while)
PIPs are bullshit, and fundamentally degrading. Just tell people "Maybe it's your fault, maybe it's our fault - but either way, it's not working out", offer a (truly decent) severance, and move on.
(I know, I know, I know: "because lawyers.")
Aside: that is a testament to the resilience of the body. The physics behind that fall would be astounding to analyze! I come from a long line of suicidal people; we're not jumpers, but swingers.
Does anyone else do this?
The facts are: a guy put in for a transfer, got put on a performance improvement plan, threatened self-harm, and then jumped off a building.
There are no details about why he requested a transfer, the reasons he got put on a PIP, or whether he was mentally unstable, such that these fairly common life events would cause him to contemplate self-harm.
Nope, the pitchforks and the torches come out.
Everything I've tried has been absolutely horrible except for "An Introduction to Error Analysis" by John Taylor (yeah the classical mechanics guy). Unfortunately it's a bit basic...
The main advantage of compilers is that their optimizations scale across a large codebase, through inlining for example.
Also, just moving from Sandy Bridge to Haswell, for example, can cause significant performance swings (in both directions). The maintenance cost of the assembly is again a scaling issue.
If you have a single function that takes a significant amount of time in your program, and performance is critical, of course you can try to go lower level. But it is likely more profitable to start with 1) pre-optimized libraries (i.e. don't write your own "sort"); 2) follow the optimization guidelines of the CPU vendors regarding memory layout, etc.; and 3) use C-level vector intrinsics if you can benefit from vectorization.
Here are my results:
  sort_asm_recurse.asm                69 ms/loop
  clang++ 3.8.0/sort_cpp_recurse.cpp  65 ms/loop
  g++ 5.4.0/sort_cpp_recurse.cpp      70 ms/loop
So on my computer, the assembly code (barely) beat g++ but not clang++. From a cursory glance at the assembly code clang++ generates, the difference seems to be that it adds alignment to critical loops.
It is also smarter about using 32-bit registers when it can get away with it. E.g. the handwritten assembly contains "xor r9, r9". An equivalent but faster variant that the compiler generates is "xor r9d, r9d" (writing the 32-bit register zeroes the full 64-bit register, with a shorter encoding).
There is also a slight error in the assembly code: rsp should be aligned to a 16-byte boundary when a call instruction is executed, and the code doesn't ensure that. It likely loses a whole bunch of performance by making calls with a misaligned stack.
Writing code is having a dialogue with the compiler, it can do better than you sometimes, and vice versa, but treating the compiler as a magic box that always spits out faster code than you is pretty silly.
I also wish I knew what optimization settings GCC/etc was using, and what effect tweaking those has.
I encourage everyone to write some assembly; you'll learn a lot. But use a compiler for your work.
I would seriously challenge anyone to try to do, by hand, what PLUTO+ does: http://dl.acm.org/citation.cfm?id=2688512
It is implemented in at least one real production C++ compiler. The analogues would be Graphite in GCC and Polly in LLVM, but they don't have the full cost modeling it does. Then try to do it for multiple architectures, or even different cache models (i.e. newer vs. older processors).
Even simpler things than that, like deciding when it is profitable to add runtime vectorization/alignment checks, etc., are really hard to do by hand. Hell, in larger functions, I doubt people can even do register allocation (including live-range splitting, rematerialization, etc.) optimally.
So yeah, stupid quicksort, sure, you can beat it.
I'm not sure what it's supposed to prove?
If you restrict yourselves to small cases that are easily optimizable without any thought, and not amenable to any even slightly advanced optimization, yes, you can beat the compiler.
Is this the correct use of this terminology? I thought intrinsics were functions that allow you to tell the compiler to use particular instructions, specifically so you can avoid dropping into assembly. In assembly, wouldn't you just call them "instructions"?
It would be interesting to see the assembly output of all the compilers, and what the compiler settings are
Then there are some aspects that compilers might not optimize a lot for. I like this guide: http://www.farbrausch.com/~fg/seminars/lightspeed_download.p...
It's old, dated, whatever you want, but covers the basics.
edit: it seems that link got the "HN hug of death".
So obvious yet so mind blowing.
> Because Earth moves around the sun faster than Voyager 2 is traveling from Earth, the distance between Earth and the spacecraft actually decreases at certain times of the year.
We don't even have the technology to escape the sun's gravity without needing a gravitational slingshot from other planets, and that still leaves us with a spacecraft moving slower than the earth.
And no I'm not expecting to see them, just to think about them in the right direction.
And only a couple of years of useful life remaining.
It's really cool anyway, though. I was going to say that Voyager 1 manages just short of 30 light-seconds a year, until I realised the question is one of acceleration and deceleration, not velocity. If I weren't on my way to bed I would dig around for some data to make acceleration graphs for them (hint hint :P ).
A guy writes about his brush with fame when MW was rude to him on Twitter.
I really wish this mind-set was more common.
PocketCHIP - http://getchip.com
LameStation - http://www.lamestation.com
Gamebuino - http://gamebuino.com
Meggy Jr RGB Soldering Kit - http://shop.evilmadscientist.com/productsmenu/tinykitlist/10...
I bought the Piper (Minecraft + Scratch) for my son - http://playpiper.com - and
there is also Kano - http://kano.me
The GBA goes for similar prices, and has similar benefits (and yes, it's got a solid toolchain). Plus, it plays all the old gameboy games, so all of the above applies (although LSDJ is a royal pain to use on the original GBA, as it assumes a standard GB button layout. Nanoloop has a version explicitly for the GBA, which takes advantage of the new sound hardware, but Nanoloop carts are very expensive).
...but with buttons that small and close together, for me it would be painful to play for any long period. Heck, I didn't play my gameboy color for long because its smaller size caused my thumbs to ache after a while.
Maybe my hands are just too big.
I've found PICO-8 to be just limiting enough to still offer excellent games, but this seems a bit more gadgety.
Tbh the killer features with any of these retro consoles would be to have a good button feel. The buttons used on these are usually pretty weak. I would be much more excited with a system with a GBA d-pad and buttons.
That and maybe something a bit more powerful than an Arduino. The magic of PICO-8 with Lua is that your first lines of code are about the game, not memory bookkeeping.
That's almost 70 Canadian dollars. I was expecting something around $19 USD.
In any shape or form, this is a great idea: credit-card-sized handheld gaming that can be left forgotten until you realize your phone is out of battery and you suddenly have nothing to do. Also, I just love the small screen with its tiny controls; I can see myself playing Pokemon Blue on it.
Oh well... missed opportunity.
I agree with others that you probably don't want another embedded Linux, but a M0 or M3 would have been nice.
Obviously, I would be overjoyed to be proven wrong.
"Arduboy, the game system the size of a credit card"
Or is this tutorial sufficiently stand-alone without React (web) experience?
Thanks for sharing your work!
The biggest pain point so far is adding analytics but otherwise, everything is pretty smooth sailing.
Also I come from a web developer background, so structuring layout isn't too unfamiliar. AMA if you have any questions. I'm looking to pour all these experiences into a blog post before my next distraction comes along and terrorizes my short-term memory.
I'm already familiar with native coding, but not sure how much better RN is than say PhoneGap.
SCRIPT438: Object doesn't support property or method 'findIndex' File: bundle.js, Line: 2, Column: 26381
Question - is it possible to make a third-party keyboard in React Native currently (the kind you download and install from the app store)?
The ME is purportedly placed in "recovery" mode:
According to Nicola Corna, the current ME state should have been changed from normal to recovery.
I don't speak for the FSF, but it sounds like this is as close to an FSF RYF certification as any Intel CPU is going to get. FSF approval of a device requires that all user-modifiable software be Free Software. Previously, no recent Intel CPUs could be FSF certified as "RYF" because the ME chip would shut the system down after 30 minutes. (Side note: no recent Intel CPUs can be considered "stable" without microcode updates which also violate the FSF's RYF guidelines.)
  flashrom -p internal -r bios.rom
  ifdtool -x bios.rom
  python3 me_cleaner.py flashregion_2_intel_me.bin
  python2 dump_me.py flashregion_2_intel_me.bin -x
  python2 me_sigcheck.py FTPR_part.bin
  ifdtool -i ME:flashregion_2_intel_me.bin bios.rom
  exit  # Skip this line if you're okay with bricking your motherboard.
  flashrom -p internal -w bios.rom.new
  ME: FW Partition Table      : OK
  ME: Bringup Loader Failure  : NO
  ME: Firmware Init Complete  : NO
  ME: Manufacturing Mode      : NO
  ME: Boot Options Present    : NO
  ME: Update In Progress      : NO
  ME: Current Working State   : Initializing
  ME: Current Operation State : Bring up
  ME: Current Operation Mode  : Normal
  ME: Error Code              : Debug Failure
  ME: Progress Phase          : BUP Phase
  ME: Power Management Event  : Pseudo-global reset
  ME: Progress Phase State    : 0x3b
  ...
  ME has a broken implementation on your board with this BIOS
  ME: failed to become ready
RISC-V can't take the market over fast enough.
Surely, at least 1 Intel staffer reads HN and they must have discussed this internally.
Unless they just brush this off as negligible (a couple thousand paranoid/"extremist" users) ?
* A set of software tools: Check
* Unauthorized user: Check. Caveat: the user is not authorized by you, but by someone else (Intel)
* Gains control of a computer system without being detected: Check. It can access your machine while it appears to be "powered off" but plugged in, has full access to RAM, can draw undetected on top of the screen, and can read the screen.
So. Does this qualify the Management Engine as a rootkit? It meets the definition. Just because the rootkit is installed by the manufacturer doesn't make it less of one.
+1 for the use of ifdtool.
In hindsight, I am glad that I studied math at Stanford (also I ended up doing something completely different: marketing). It pushed me to think deeply and patiently about a problem at length and taught me how to mix intuition with analytic rigor.
I believe it was Paul Graham who said that math was one of the better subjects to pursue in school because it's one of the most difficult. I kind of agree with pg on this one: I was nowhere near the top of my math classes, but whenever I took theoretical CS or stats classes, I found them shockingly easy and did well with much less effort.
What are we looking at here for someone who wants to learn all this while keeping a full-time job? 3 years?
I wonder how the Rust version compares with plain-jane C.
-Austin (murmurhash guy).
The first line, "x x 32", might have a typo? It's actually assigning (x XOR (x >> 32)).
With "x px" (presumably x *= p), p is a fixed prime number being multiplied by x.
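For context, that xor-shift-then-multiply pair is a standard integer-mixing round. A Python sketch of the pattern follows; the shift amount and multiplier constants here are MurmurHash3's 64-bit finalizer, used purely as a familiar example and not necessarily this spec's values:

```python
MASK64 = (1 << 64) - 1  # emulate 64-bit wraparound in Python

def mix(x):
    # "x ^= x >> k": fold high bits down into the low bits
    x ^= x >> 33
    # "x *= p": multiply by a fixed odd constant, modulo 2**64
    x = (x * 0xff51afd7ed558ccd) & MASK64
    x ^= x >> 33
    x = (x * 0xc4ceb9fe1a85ec53) & MASK64
    x ^= x >> 33
    return x
```

The multiply spreads low-bit differences upward and the xor-shift spreads high-bit differences downward, which is why the two steps alternate.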
BTW, the hyperlink for "reference" in the "Specification" section is broken.
> [W]e did not find any unexpected impediment to deploying something like NewHope. There were no reported problems caused by enabling it.
It's great to have this data.
Minor question: I assume CECPQ1 stands for something like Concatenated Elliptic Curve w/ Post-Quantum #1, right?
Bigger question: will there be a CECPQ2 experiment? I really hope so! Based on how CECPQ1 was constructed (X25519+Newhope), and how this experiment was executed, I'd love to see Google continue playing an active role in PQ experimentation.
Any good "Cryptography Engineering"-style post on this NewHope algorithm explaining what it does and its limitations? Any reason not to get excited about this being done in a practical application?
> ... he immediately responded that when he taught algebra courses, if he was discussing cyclic subgroups of a group, he had a mental image of group elements breaking into a formation organized into circular groups.
Jacques Hadamard once conducted a study of how mathematicians approach their work, which can still be found in a book called "The Psychology of Invention in the Mathematical Field". Here's an excerpt:
> Indeed, every mathematical research compels me to build such a schema, which is always and must be of a vague character, so as not to be deceptive. I shall give a less elementary example from my first researches (my thesis). I had to consider a sum of an infinite number of terms, intending to valuate its order of magnitude. In that case, there is a group of terms which chances to be predominant, all others having a negligible influence. Now, when I think of that question, I see not the formula itself, but the place it would take if written: a kind of ribbon, which is thicker or darker at the place corresponding to the possibly important terms; or (at other moments), I see something like a formula, but by no means a legible one, as I should see it (being strongly long-sighted) if I had no eye-glasses on, which letters seeming rather more apparent (though still not legible) at the place which is supposed to be the important one.
It seems to me that this is a surprisingly common approach to dealing with highly abstract subject matter. I first noticed myself doing it while reading Structure of Scientific Revolutions and later used it intentionally in working on math and CS stuff.
My experience with it so far leads me to believe it can be taught/strengthened. I've written a much more in depth essay on this in the past, but it's fairly unfinished. I still wonder about it though...
Sometimes this difference is a function of how information is displayed and how it reacts to the user. Terminals and REPLs are inherently oracles; spreadsheets connect our fingertips directly to raw information.
Other times it's just a difference in the conceptual model that users construct. Some people think of Google as an oracle ("what is the weather like?"); others as a bicycle ("weather"). Those who bicycle around Google aren't just better at Googling; they have a fundamentally different view of what they're doing.
This isn't to say that oracles are inherently wrong or that bicycles are always better, but there's a huge difference between truly augmenting a human and merely interfacing with one. It's important to know which idea is appropriate for any given problem space.
(most of this comment was ripped from here: http://joelgustafson.com/ideas/2016/08/25/oracles-and-bicycl...)
(see also http://mental.bike, which I don't know why I own)
Sussman's talk on how to think about circuits is here. Here are the slides. The video shows Sussman's talking head and him pointing at an off-screen display of the slides, which is not too helpful.
 https://www.infoq.com/presentations/We-Really-Dont-Know-How-... http://web.mit.edu/xtalks/Sussman-xTalk-3-2-16.pdf
I have seen various books and resources that come close (Edward Tufte books for example) but I think there could be much more work done.
There seem to be quite a few books on mechanical motion that are basically an encyclopedia of patterns that can be used to create new devices but I haven't really seen much type of this work done for computer interfaces.
If anyone has any good references please share.
Reading about this research makes me think that this sort of animal experimentation will certainly be one of those things.
In the neuroscience conference they also showed a DIY optogenetic fruitfly kit, but it's a pity the channelrhodopsin transgenic fly is not available outside a neuroscience lab.
Because in a bankruptcy judgments are given priority over debt, if you're holding Theranos debt you can try to jump to the front of the line by suing for fraud (etc.), in the hope of getting a judgment in your favor.
So the real takeaway is, it seems all these companies have concluded that bankruptcy is inevitable.
Imagine that $15 million invested in $50k increments into 300 startup ideas: the jobs that would be created, the increased human capital, and the impact on their respective local economies. Those aren't there because of risk aversion (it still amazes me that VCs shun risk but still throw around "venture") towards people of a lower class than them. Compared to, say, the daughter of a CEO at a large company who also attends the same country club/religious functions, somebody coming straight out of university suddenly looks like a riskier bet.
It looks like a pretty hopeless situation for Theranos but it also exposes the crony capitalism that you'd find in other parts of the world with ease, with heavy hitters like Henry Kissinger (wtf) sitting on the board.
He will most likely never see that money again because he thought he was mitigating risk by piggybacking behind a powerful individual with powerful connections that's accessible by blood.
It's hard to feel any sympathy.
Colman et al v. Theranos, Inc. Docket: http://www.plainsite.org/dockets/3350guyw8/california-northe...
> Since it's possible to hash multiple items to the same indices, a membership test returns either false or maybe.
Bloom filters give no false negatives but a controllable rate of false positives. If a Bloom filter indicates that an item has not been seen before, we can be certain that's the case; but if it indicates an item has been seen, it's possible that's not the case (a false positive).
This style of probabilistic data structure can be used to achieve a significant speedup when the full test for set membership is very expensive. For example, run the test using the Bloom filter. A negative result can be taken as truth, while only positive results will have to be tested against the expensive algorithm to weed out false positives. Obviously, the expectation should be that negative results are far more common than positive results.
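That asymmetry can be sketched in a few lines of Python. This is a minimal illustration only; the bit count, hash count, and salting scheme are arbitrary choices, not a production design:

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # an int used as a bit array

    def _indexes(self, item):
        # Derive k indexes by salting a cryptographic hash (slow but simple)
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for idx in self._indexes(item):
            self.bits |= 1 << idx

    def might_contain(self, item):
        # False here is definitive; True only means "maybe"
        return all(self.bits >> idx & 1 for idx in self._indexes(item))

bf = BloomFilter()
bf.add("seen")
assert bf.might_contain("seen")  # inserted items are never reported absent
# might_contain("unseen") is usually False, but can occasionally be True
```

An unseen item reads as present only if all of its k bits happen to have been set by other items, which is exactly the controllable false-positive rate described above.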
> A single HyperLogLog++ structure, for example, can count up to 7.9 billion unique items using 2.56KB of memory with only a 1.65% error rate.
Incidentally, Redis has an implementation for HyperLogLogs that has an even smaller error rate of 0.83%, though it uses more memory if I recall correctly. :)
Fingerprint size - it allows fingerprints that are too short; basically, less than 4 bits doesn't allow a reasonable fill capacity. The paper authors only hinted at this, but check out the fill-capacity graphs on page 6. This could be why your inserts slow down around the 80% level, when in my experience it doesn't happen till around 93%.
Modulo bias - Your filter capacity code doesn't seem to round the filter size to a power of two. This is a simple fix, but without it your array indexes will be skewed by modulo bias, possibly badly if someone picks a nasty number.
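The bias is easy to demonstrate. With a non-power-of-two table size, `h % size` sends one extra hit to each of the low buckets, while masking against a power-of-two size stays perfectly uniform (the sizes below are arbitrary):

```python
from collections import Counter

# 2**16 consecutive hash values into 1000 buckets via modulo:
# 65536 = 65 * 1000 + 536, so buckets 0..535 get one extra hit each
counts = Counter(h % 1000 for h in range(2**16))
assert counts[0] == 66 and counts[999] == 65

# With a power-of-two size, masking distributes evenly: 65536 / 1024 = 64
counts_pow2 = Counter(h & (1024 - 1) for h in range(2**16))
assert set(counts_pow2.values()) == {64}
```

The skew here is mild because 1000 is close to 1024; a "nasty" size (say, just above a large power of two) makes it far worse.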
Alternate bucket positions - Your code seems to do a full hash for calculating alternate bucket positions. I know the paper mentions this, but I haven't seen anyone actually doing it :). It's a lot faster to just XOR with a mixing constant. TBH that's what most libraries are doing... whether it's a good idea is debatable :).
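The XOR trick works because the derivation is an involution: applying it twice returns the original bucket, so an item can be relocated without ever re-hashing the original key. A sketch, assuming a power-of-two bucket count (the mixing constant is MurmurHash2's, chosen only as an example):

```python
MIX = 0x5bd1e995  # example mixing constant (MurmurHash2's)

def alt_index(index, fingerprint, num_buckets):
    # num_buckets must be a power of two so the mask is uniform
    return (index ^ (fingerprint * MIX)) & (num_buckets - 1)

i1, fp = 37, 0xAB
i2 = alt_index(i1, fp, 1024)
assert alt_index(i2, fp, 1024) == i1  # the mapping is its own inverse
```

Without the multiply by a constant, items with small fingerprints would only ever move between nearby buckets, which is why a bare XOR of the fingerprint alone clusters badly.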
Fingerprint zero corner case - I'm not that great at python, but I didn't see any special handling for the rare case that the hash fingerprint is zero. In most implementations zero means empty bucket, so this could violate the "no false negatives" aspect of the filter by making items rarely disappear when they were supposed to be inserted. Most implementations just add one to it, but I prefer to re-hash with a salt.
No victim cache - Didn't look too much into it, but I didn't see a victim slot used in your code. This will cause problems when the first insert fails. The problem is, by the time the first insert actually fails, you've relocated a bunch of different fingerprints like 500 times. It becomes unclear which fingerprint you originally tried to insert, and you're left holding a dangling item from a random place in the filter that you cannot insert. This violates the "no false negatives" mantra. Even though the filter is full it shouldn't break by deleting a random item when the first insert fails. You either need to store this last item or figure out a way to unwind your insert attempts to be able to reject the item that originally failed to insert.
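To illustrate the victim-cache point, here is a hypothetical insert routine (all names and parameters invented for this sketch) that hands the dangling fingerprint back to the caller on failure instead of silently dropping it; the caller can then keep it in a one-slot victim cache and still answer lookups for it:

```python
import random

SLOTS_PER_BUCKET = 4
MIX = 0x5bd1e995  # example mixing constant for the alternate index

def insert(buckets, bucket_mask, i1, i2, fp, max_kicks=500):
    """Try both buckets, then kick; return None on success or the
    displaced fingerprint (not necessarily fp!) when the filter is full."""
    for i in (i1, i2):
        if len(buckets[i]) < SLOTS_PER_BUCKET:
            buckets[i].append(fp)
            return None
    i = random.choice((i1, i2))
    for _ in range(max_kicks):
        # Evict a random resident fingerprint and take its slot
        victim = buckets[i].pop(random.randrange(len(buckets[i])))
        buckets[i].append(fp)
        fp = victim
        i = (i ^ (fp * MIX)) & bucket_mask  # victim's alternate bucket
        if len(buckets[i]) < SLOTS_PER_BUCKET:
            buckets[i].append(fp)
            return None
    return fp  # filter full: caller must cache this, or queries for it
               # will return false negatives
```

Note that the returned fingerprint belongs to some item kicked mid-chain, not the item the caller just tried to insert, which is exactly why it can't simply be discarded.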
Check out my Java library if you want to see how I dealt with these things. Also I have a bunch of unit tests there that I either came up with or borrowed from other Cuckoo libs. Should be pretty easy to convert some of those to python :) .
That's not a throughput increase, that's a throughput decrease. Throughput is operations per unit time, not time per operation.
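To put numbers on the distinction (toy figures, not from any benchmark):

```python
ops, seconds = 1_000_000, 2.0
throughput = ops / seconds   # 500,000 operations per second
time_per_op = seconds / ops  # 2 microseconds per operation
# If each operation takes twice as long, throughput is *halved*:
slower_throughput = ops / (2 * seconds)
assert slower_throughput == throughput / 2
```

So a larger ms/op figure means the throughput went down, not up; the two quantities are reciprocals.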
Obviously, if this is not a correct way to think of it, I'm open to more correct ways.
1. If an item in the Cuckoo filter needs to be moved, how does one know its other hash/location if only a fingerprint of the original item is stored (i.e. it can't be hashed again)?
2. [From the linked pdf] "Insertions are amortized O(1) with reasonably high probability." In case of a rehash, every item needs to be hashed and inserted again. This seems very expensive and impractical for data at Twitter scale, even if it only happens rarely. Or are there any workarounds to mitigate this?
It seems like for most applications, silently degrading (instead of rejecting the insertion) when the bloom filter is above capacity is a super useful property.
Sometimes there is genuinely no way to find the perp, but a lot of the time, either the police are cowed into not really investigating too hard (when the perp turns out to be the son of a highish ranking government official, or wealthy business family, for example), or there is this kind of attempt to make things go away by paying-off the victims, police and anyone else necessary.
I would guess (but you never know) Huawei itself may have little involvement, other than providing the salary required to drive such an ostentatious vehicle, etc, in what is still a poor, developing (though thoroughly corrupt) nation (http://i46.photobucket.com/albums/f135/gavinmac/15181532_346...)
Also, I'm sure were this domestic that a returned-to-China Facebook would do the same thing.
My opinion on this story: some rich wanker tries to make problems go away with money and intimidation; not unusual. The worry is that there needs to be recognition and kudos for news sources that don't cave in.
In the US, too, the mainstream media has largely sold out to Saudi money and tends to suppress news critical of Islam, aided by the pseudo-liberal practice of intimidating any critic of the oppressive ideology of Islam with accusations of racism, Islamophobia and what-not.
If not for the Internet and social media, the ex-Muslims' voices would have been suppressed by the corrupt mainstream media.
 https://www.youtube.com/watch?v=xDIR3GhXszo https://www.reddit.com/r/exmuslim/