Trading has changed significantly since the 'big bang', when it moved from the pits to electronic. From there on you see the evolution of algorithmic/program trading, an area that has been using quants for decades at this point. There are a good few big-name brands out there known for being 'algorithm heavy'; Man, Citadel, and D. E. Shaw come to mind (I'm a few years out of date). That whole field has been open to introducing automation/algorithms to create a business edge and will probably continue to advance because it's good for business. The profile of traders has also changed (barrow boys versus PhDs).
Then I guess on the other side is investment banking, such as M&A and equity and debt capital markets. Generally that's relationship-based; juniors work on pitch books which, from what I saw and heard, were generally overlooked. This is potentially a lot harder to automate away. The bank then tries to pull in some rainmakers, or grow them internally, to land big deals. Usually these opportunities open up because their clients (other companies) have learnt to trust the organization, or at least have learnt to expect a certain behaviour when enlisting their services.
Reuters has been trading FX electronically since the early 1990s. At the tier-one IB I worked for, the IT budget was 500m USD a year (across products), and that was in 1997! Huge resources were thrown at automation. However, to this day, large trades in FX (> 10m USD notional) are still almost exclusively performed by humans over a telephone or over the Bloomberg messaging system.
That's because, no matter how much you automate stuff, there is still the 1% "edge case" scenario where something goes wrong, and when that happens, you most definitely want a human you can "look in the eye" when you have that sort of execution risk. Remember that markets move really fast and there is a lot of risk in big trades that "go wrong", because unwinding said trade will almost certainly cost one of the sides a fortune.
Also, high finance is not just about what you know. It's inevitably about who you know, and about "illogical" factors such as salesperson charisma, entertainment, and most importantly a credible personality type that understands the edge-case risks. These things are very hard to replicate with a machine. You may say they're unfair and should be removed, but they remain a fact after many attempts at removing them have failed.
As for AI, let's for now call it what it is: machine learning. Learning from the past. That's fine for recognising stop signs at different distances, angles and degrees of noise. But in finance, the past is often misleading. Sure there's trend, but there are also very big instabilities in the historical correlation matrix. Paradigms shift without you even realising it. The constant is change. AI is not good enough at that, yet.
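To illustrate that correlation-instability point with a toy example (synthetic data, not market data; assumes numpy; entirely my own illustration):

    # The same pair of series flips from positively to negatively correlated
    # halfway through, so the full-history correlation hides both regimes.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    common = rng.normal(size=n)
    a = common + 0.3 * rng.normal(size=n)
    # Second half moves against `a`: a paradigm shift.
    b = np.concatenate([common[:n // 2], -common[n // 2:]]) + 0.3 * rng.normal(size=n)

    print(np.corrcoef(a[:n // 2], b[:n // 2])[0, 1])  # strongly positive
    print(np.corrcoef(a[n // 2:], b[n // 2:])[0, 1])  # strongly negative
    print(np.corrcoef(a, b)[0, 1])                    # near zero overall

A model fit on the full history sees "no relationship" and a model fit on the first half inherits a stale regime; both are wrong after the shift.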
BTW, that's not to say machines are not making inroads. It's becoming almost impossible to get a decent trading job now without knowing at least R and Python to a comfortable degree, and good quant programmers cost a fortune. There's massive demand.
There were still humans in charge of the algorithms, but the roles shifted more towards Python programmers than market traders.
Many of the "old-style" traders bitched about what we did, and most moved jobs to banks that were less advanced.
(I was in the interest rates line; typical trade size is $10M)
You mean I can get an unbiased look at a house in peace, compare the numbers, look at the plans, measure the humidity and do my due diligence without a sales person breathing down my neck?
Hell yeah. I'd pay a premium for that.
Answering my own question: I'd expect bankers to save more (of their own money) in anticipation of the good times not lasting as long. More conservative types will weather the storm and spendthrifts will get wiped out.
In other words: business as usual, up until the very moment it isn't.
Believing "I am special" is just built into us.
I don't think this is true. Not at all.
If your job can be done overseas, it probably will be outsourced.
If your job is algorithmic, you'll be replaced by a computer.
A lot of M&A activity though doesn't fit that category.
One thing I don't like about either editor is that they're web-based. While it makes a ton of sense on paper, I hate doing any serious 3D work in a browser window. Something like Microsoft's Language Server Protocol but for graphical editors would be amazing. Run the project in a browser window while having bidirectional flow of data between a native desktop editor and the browser window.
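As a rough sketch of what that bidirectional flow could look like, here's a toy relay server in Python (assuming the third-party websockets package; the message shapes and the whole design are hypothetical, not any existing protocol):

    # Toy relay: every connected client (native editor, browser viewport)
    # receives whatever the others send (scene diffs, selection events, ...).
    # Assumes recent versions of `websockets`, where the handler takes a
    # single connection argument.
    import asyncio
    import websockets

    CLIENTS = set()

    async def handler(ws):
        CLIENTS.add(ws)
        try:
            async for message in ws:
                # Forward each message to every other connected peer.
                for peer in CLIENTS:
                    if peer is not ws:
                        await peer.send(message)
        finally:
            CLIENTS.discard(ws)

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())

The real work would be agreeing on the message schema, the way LSP did for language tooling; the transport itself is the easy part.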
Unfortunately if you want to run something like Three.js inside of a native desktop editor, you'd have to embed a web runtime. That really balloons project complexity, so I can see why so many people prefer web-based editors when making web-based projects.
One alternative, at least for 3D applications, is multi-platform frameworks that also work on the web. Oryol in particular comes to mind. Hypothetically you could build a native editor around it, with no need to embed a runtime. The native editor's viewport would just use native graphics APIs for rendering. Then when you like what you see, just compile the same thing for the web. While some edge cases may not make it that easy, overall it seems to be a far superior workflow than having to deal with web-based editors or embedded web within native applications.
Unity 5 and Unreal Engine 4 both have incredible, native desktop editors that support exporting to the web. Unfortunately, they both have massive runtimes that make their web footprints a joke, among other problems.
Would be cool to render the scenes out! Has anyone (successfully) run Cycles through Emscripten? :)
I can't help but feel today's absurdly draconian copyright laws and lengths are going to make our century a terrible cultural black hole for our future-human descendants.
The vast, vast majority of creators who produce culture--books, movies, music, visual art, etc.--don't see widespread distribution, success, or fame in their own lifetimes. This is true today and it was true in the past too.
Today we have projects like Project Gutenberg, the Internet Archive, and others, whose volunteers give their time and energy to lovingly archive, curate, and distribute out-of-copyright works. Many of these works get a cultural second life: people today can freely discover and read a book published before 1923, whose author might have died penniless and unknown in their day. Their work, their name, their ideas, and their legacy live on 100 years after their death, even if the world they lived in overlooked them, because volunteers can freely update and distribute those works.
Can we say the same about the vast majority of work produced today, whose creators don't attain widespread fame or cultural influence? Everything anyone produces, from a smudge on a piece of paper to the Great American Novel, now automatically gains a copyright that can last over a century, and whose term has almost never failed to be extended. As one of the many everyday creators who never see significant success, how would you feel if your work was not only ignored today, but for all time too?
Our future-human descendants are going to think culture from 1923-2100 consisted of almost nothing but the 400 comic book movies, the 75 Star Wars movies, and Harry Potter. Everything else will be too unprofitable for a megacorporation to distribute, and illegal for anyone else to.
Copyright law is ridiculous. Nonfiction and scientific work should be treated differently than fictional works. I don't really care if Mickey Mouse goes into the public domain. But it's crazy that 100-year-old scientific works can still be under copyright and illegal to distribute. These objectively have value to society, and the argument for the existence of the public domain is much stronger.
And why on Earth should copyright last so long to begin with? How many works are worth anything after 10 or 15 years? I believe 99% of all works make 99%+ of their revenue in the first few years. Having copyright last a lifetime, let alone much longer, is just crazy. Creators benefit exponentially less from every additional year of protection. And only the very successful ones even benefit to begin with - the vast, vast majority of works are just forgotten by that time.
Put a cost on renewing copyright. This is actually how it used to be: halfway through, you could pay a fee to have the copyright extended. Very few people paid this fee (because most works aren't economically valuable), so most works went into the public domain much sooner. Journals charge $30 to access obscure ancient papers. But I bet they wouldn't pay even $30 to keep the rights to those same papers.
Don't put everything into copyright by default - again, especially works of nonfiction or scientific papers. If the authors want that, then sure. This wouldn't fix the issues with big journals that demand it. But it still seems like a sensible idea to have copyright be opt-in, not opt-out.
"What books would be entering the public domain if we had the pre-1978 copyright laws?"
Harper Lee, To Kill a Mockingbird
John Updike, Rabbit, Run
Joy Adamson, Born Free: A Lioness of Two Worlds
William L. Shirer, The Rise and Fall of the Third Reich: A History of Nazi Germany
Friedrich A. Hayek, The Constitution of Liberty
Daniel Bell, The End of Ideology: On the Exhaustion of Political Ideas in the Fifties
Arthur M. Schlesinger, Jr., The Politics of Upheaval: The Age of Roosevelt
Dr. Seuss, Green Eggs and Ham and One Fish Two Fish Red Fish Blue Fish
Scott O'Dell, Island of the Blue Dolphins
John Barth, The Sot-Weed Factor
Jean-Paul Sartre, Critique de la raison dialectique
The Time Machine
Psycho
Spartacus
Exodus
The Apartment
Inherit the Wind
The Magnificent Seven
Ocean's 11
The Alamo
The Andy Griffith Show (first episodes)
The Flintstones (first episodes)
It seems like an extension of trademark law, with continued registration and payments on the characters, would be a compromise that could break loose the original book/movie.
I would imagine many would hate to give up the ability to create new, original stories or do mashups, but we won't get that anyway.
Unfortunately I'm not sure I can feel the same optimism. Copyright law has long been beholden to corporate interests. For them, it's only fine if it keeps getting expanded indefinitely. The more restrictive the terms, the more likely that others will have to license their works and the more they can bludgeon any offenders.
Your view is really tied to the present day, but what about all these works that aren't available on computers and desperately need to be digitized? I think we already have a natural curation mechanism: something popular and important gets reproduced in some way; it gets quoted, adapted, tweaked. Those are the works that need to be prioritized, but in doing so we might miss out on undiscovered gems or things whose importance will only be apparent in hindsight. Also, storage is cheap nowadays; we can easily just keep everything, index it, and let others sift through it later. (One example is the Internet Archive: we won't know for a few decades what role these stored websites, like the GeoCities archive, could play.)
The whole point of the gen-2 hardware is that it should allow fully autonomous cars once the software is fully figured out, even in rain, which others likely won't be able to handle. Next year it will be exciting to see how quickly they can push updates towards that goal, as well as how quickly regulation can catch up.
The software clearly didn't follow the flow of traffic; it was rigidly locked into going the speed limit. That may be OK, but it was a very empty weekend and everyone was going 10-15 miles over the speed limit. So he had quite a chain of cars backed up behind him, and something like a half-mile gap in front of him before the next car.
It was clear the Tesla software wasn't smart enough to close the gap by accelerating above the posted limits, or move him over to the right-hand lane where he wouldn't block traffic -- we all ended up having to pass him on the right.
I couldn't help but think, "Great... just what we need more of... simulated old people driving slow in the left-hand lanes and cars that encourage jackasses on their phones to take pictures instead of paying attention."
But... to be clear, I still want one. =P
It gives a new meaning to the word "crash"
The sites are created using Middleman, a Ruby static site generator which I've found to be a little more flexible than Jekyll.
We do not host our own checkout. Instead we use Shopify's ancient and way under-publicised "Cart Links" feature. Cart Links let you pre-populate a cart and send the user to the checkout if you so wish.
To upload the static files to S3 we use an awesome program called S3_website, which knows how to find the rendered HTML from a number of static site generators and sync it to S3. It's also smart enough to set up redirects, invalidate CDN caches, and even gzip content. It's freaking amazing.
 Middleman - https://middlemanapp.com
 Shopify Cart Links - https://help.shopify.com/themes/customization/cart/use-perma...
 S3_website - https://github.com/laurilehmijoki/s3_website
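For the curious, a cart link is just a URL, so generating them from a static site is trivial. A minimal Python sketch (the store domain and variant IDs are hypothetical placeholders; check Shopify's docs for the exact permalink format):

    # Builds a Shopify cart permalink of the form /cart/{variant_id}:{qty},...
    def cart_link(store, items):
        """items: list of (variant_id, quantity) pairs."""
        path = ",".join(f"{vid}:{qty}" for vid, qty in items)
        return f"https://{store}/cart/{path}"

    print(cart_link("example.myshopify.com", [(12345678, 1), (87654321, 2)]))
    # -> https://example.myshopify.com/cart/12345678:1,87654321:2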
I don't get it. I've been using a static site built in Jekyll, which just works(tm). I recently rebuilt my blog with AMP compliance, and it still looks the way I want it to.
If you're not a Ruby person, there's Hugo as a Go alternative. For blogs, static pages mean the end of maintenance and vulnerabilities, and they're cheaper to host.
Edit: of course, I answered the blog question, but not the ecommerce one.
It is just as open source. It's newer, though, and has only recently hit stable. A strong plugin system. Purpose-built as a CMS from the ground up, on Laravel with all the nice things.
- No DB. Uses a flat-file layout similar to Jekyll or static site generators.
- Offers dynamic features like redirecting and custom routing when you need it. This isn't possible with a pure static site generator.
- Decent optional panel to write, edit and manage almost all aspects of your site.
- Quite fast once you set it up with good caching.
Other options are to go with static site generators like Middleman or Hugo for your blog and set up a shop on Shopify or Sellfy.
As a side note, I have open-sourced the code for my shop/blog that is running at https://www.authenticpixels.com. It is written in Elixir/Phoenix.
For a CMS I can really recommend http://mezzanine.jupo.org/
It also includes a blog, is in Python/Django, and fulfils all needs from very small to very large sites.
In addition to SSL support, ikiwiki offers several other useful features for constructing a blog or news site, such as "take all the pages under blog/* and emit a page with the last 10 in reverse order, including an RSS feed".
Do you want to customize everything with ease? Do you have someone who can maintain the security of your blog and eventually add new features later? => WordPress on your own hosting
Do you want to use a very classical theme? Do you want a complete service with no skills required and no maintenance? => Wix or WordPress.com
Do you want to customize everything? Do you have someone to maintain the blog on a regular basis? => Jekyll on GitHub or on your own hosting
Should you need to sell things online with the blog, WordPress can be augmented with Shopify, while Wix already has this feature.
It has a local webserver, spell check, optional image compression, and minimal dependencies.
I don't get the need for Jekyll or Hugo. They're bloated and it's a pain to customize so-called "themes". I'm OK with 'boring' HTML and CSS.
I don't like WordPress for eCommerce, but I think it's great for blogs and content sites.
For eCommerce... just too many variables. Who you want to use for fulfillment, what other systems you want to integrate with, if you need a staging instance or customer loyalty software or any of the 50 other things you can integrate.
For content... Ghost is OK. Just... WordPress has thought of everything already. Plugins, solid UX, extra features you didn't know to ask for... it's hard for other platforms to catch up.
I wonder if the couple dozen lines of assembly code could be trivial enough to be public domain. Assuming a straightforward implementation, surely there is far less freedom in expressing the simplest version of the echo program in ASM compared to, say, C?
touch foo.txt; chmod 400 foo.txt; echo ouch > foo.txt; echo $?
...but it appears this asm always returns zero.
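For contrast, here's a rough Python stand-in for echo that does propagate write failures through its exit status (just an illustration of the expected behavior, not the asm under discussion):

    # A toy echo that exits non-zero when the write fails, which is the
    # behavior the shell test above is probing for.
    import sys

    def main(args):
        try:
            sys.stdout.write(" ".join(args) + "\n")
            sys.stdout.flush()
        except OSError:
            return 1  # write failed (e.g. full disk, closed pipe)
        return 0

    sys.exit(main(sys.argv[1:]))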
In the past, while contracting, I was usually asked to include estimates for tests in my proposals.
The tests failed to be useful, simply because they were written after the feature was actually implemented! We knew better, of course, but this was done in order to quickly get builds a client could look at.
Then when clients wanted to add/change features, guess what got cut to make up for the time? That's right, the tests!
So the tests were always secondary, and the projects tended to suffer as a result.
Recurring "fixed bugs" cost more than just working hours to fix.In the eyes of a client or customer, they are so much worse than a shiny new bug.Tests can help catch recurring bugs before a client/customer does - and save you not only time,but from losing your customers confidence.
Now I'm building my own app, and I'm using a disciplined TDD approach. I didn't start my project this way, as it seemed overkill when it was just me. But I saw early on that not practicing TDD, even solo, was madness. It is taking longer, but my actual progress is consistent, and I'm already far more confident about the stability of the app.
Sometimes my tests just start as a bunch of prints to see the results visually. Then when I'm happy with the results I convert these prints to assertions and the playground becomes a real test suite.
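A minimal sketch of that progression in Python (the slugify function is a hypothetical example of my own, just for illustration):

    # Playground version: eyeball the output.
    def slugify(title):
        return title.lower().strip().replace(" ", "-")

    print(slugify("Hello World"))  # hello-world
    print(slugify("  Spaces  "))   # spaces

    # Once the output looks right, the prints become assertions and the
    # playground becomes a real test suite.
    def test_slugify():
        assert slugify("Hello World") == "hello-world"
        assert slugify("  Spaces  ") == "spaces"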
Particularly enjoyed the emphasis on regressions. I converted to testing when working on a relatively complex data transformation. This was replacing an existing, scary data transformation process that was hard to test (we'd run new code for a few days and do a lot of manual examination), so I made extra certain to design the new system so it was testable. Catching regressions in test, especially for data processing, is just so much better than catching and repairing them in production.
As the article notes, writing all tests first is unreasonable because you won't know all the implementation details until you have to make them; the tests I write first are thus functional tests, nowadays with Cucumber.
Writing tests after coding is lacking, philosophically, because you often spend your time defining the abstractions and then just rewriting a verification of that abstraction in tests, plus some null checks.
The balance I've been using is to write tests for abstractions I come up with, one by one. If an abstraction is decoupled and encapsulated, the unit tests come naturally. If I have to write a lot of mocks for an abstraction, that often tells me it isn't cleanly decoupled or simplified.
Furthermore, as you write tests as you go this way, you often find yourself writing the same support code more than once, at which point you notice it and find abstractions in that support code; this ends up giving you an explicit view of the conscious and subconscious assumptions you have made about the inputs you are expecting. This is often enlightening.
> I've never really subscribed to any of the test-driven-development manifestos or practices that I've encountered.
I feel the exact opposite. I've worked on projects with a lot of legacy code, both with BDD and with UTs that we added later on.
Even with the best intentions, the latter always failed: we always ended up with a lot of unreadable tests that had no meaning and that we were afraid to look at. However, when I was working in a team fully committed to BDD, we looked at the tests before looking at the code, the tests were at the center of the development process, and we were able to write fast, solid, and simple tests.
Nowadays I'm more interested in articles that understand that tests can be a pain too. And tbh I don't really trust articles that aim at high coverage without talking about the different challenges that come with tests.
I also find this a much more favorable approach than pure TDD. In my opinion, this method is easier to "sell" to other developers.
1) Make changes
2) Manually test & run automatic tests
3) Write automatic tests for each problem/bug discovered
4) Repeat
This only works for decoupled code though. If all units are coupled you must have automatic tests of everything as no-one can comprehend exponential complexity.
I suspect there's not much formal testing (at least done or required by Linus, some external projects may be available). So it seems that testing isn't that necessary for a quality project? On the other hand Linux has a large community so maybe that substitutes for a comprehensive test suite?
I do wish the reasoning had been explained to me far earlier as I might have been able to really recognize the testing as useful and not just another strange requirement.
(The code was actually already structured for testing, I just hadn't written them because of that coverage number....)
I am still running main, by the way, but that's a different invocation called "system tests" which runs if unit tests pass (and after the coverage report).
The reason is simple: it tests your tests. I have many times found bugs in tests that made them always pass.
EDIT: This explains the concept, and gives a minimal approach to testing (i.e., you should test more than this, but at least this). Of course, there are tools to automate this, but not for every (new) language.
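A minimal Python sketch of the idea (a toy example of my own, assuming nothing beyond plain assert):

    # "Test your tests": deliberately break the code under test and confirm
    # the test can actually go red. A test that still passes tests nothing.
    def add(a, b):
        return a + b

    def test_add():
        assert add(2, 2) == 4

    # Manual mutation check: run the same assertion against a sabotaged
    # implementation and make sure it fails.
    def broken_add(a, b):
        return a - b  # deliberate bug

    try:
        assert broken_add(2, 2) == 4
    except AssertionError:
        print("good: the test is capable of failing")
    else:
        print("bad: the test passes even against broken code")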
This is good advice.
On a previous (technical-debt ridden) project I did a little measuring and there was a pretty clear hierarchy of test value - in terms of detected regressions:
1) Tests written to invoke bugs.
2) Tests written before implementing the feature which makes them pass.
3) Tests written to cover "surprise" features (i.e. features written by a previous team that I never noticed existed until they broke or I spotted evidence of them in the code).
4) Tests written after implementing the feature.
5) Tests written just for the sake of increasing coverage.
Surprisingly 5 actually ended up being counter-productive most of the time - those tests detected very few bugs but still had a maintenance and runtime overhead.
* Code coverage's value: code coverage is not a goal in and of itself. Seeing 100% code coverage should not, as a statistic, make you feel comfortable that there is adequate testing. If you have 100% coverage of branching, you might indeed have verified that the written code functions as intended in response to at least some possible inputs, but you have not verified that all necessary tests have been written - indeed, you cannot know this from such a simple metric. To give a concrete example (sketched in code after the DRY point below): if I write one test that tests only a good input to a single function in which I have forgotten a necessary null check, I will have 100% code coverage of that function, but I will not have 100% behavioral coverage - which brings me to the following point.
* What to think about when unit testing a function, or how to conceptualize the purpose of a unit test: unit tests should test the behavior of code, so simply writing a unit test that calls a function with good input and verifies that no error occurs is not in the correct spirit of testing. Several unit tests should call the same function, each with various cases of good and bad input - null pointer, empty list, list of bogus values, list of good values, and so on. Some sets of similar inputs can reasonably be grouped into one bigger unit test, given that their assert statements are each on their own line so as to be easily identifiable from error output, but there should nevertheless be a set of unit tests that cover all possible inputs and desired behaviors.
* Unit test scope: a commenter I responded to in another thread gave criticism along the lines that, by making two unit tests which test cases A and B entirely independent, you fail to test the case "A and B". This is a misunderstanding of what the scope of a unit test should be in order to be a good unit test - which, incidentally, goes along with misunderstanding the intent of a unit test. A unit test, conceptually, should check that the behavior of one piece of functional code under one specific condition is as intended or expected. The scope of a unit test should be the smallest scope a test can have without being trivial; we write unit tests this way so that a code change later that introduces a bug will hopefully not only be caught, but be caught with the most specificity possible - test failures should tell the engineer a story along the lines of "_this_ code path behaved incorrectly when called with _this_ input, and the error occurs on _this_ line". More complex behavior, of the sort of "if A and B", is an integration test; integration tests are the tool that has been developed to verify more complex behavior. If you find yourself writing a unit test that is testing the interaction of multiple variables, you should pause to consider whether you should move the code you are writing into an integration test, and write two new, smaller unit tests, each of which verifies the behavior of each input independent of the other.
* Applying DRY to test setup: if you abstract away test setups, you are working against the express intention of each unit test being able to catch one specific failure case, independently of other tests. Furthermore, you are introducing the possibility of systematic errors in your application in the _very possible_ case of inserting an error into the abstractions you have identified in your test setup! And if you find yourself setting up the same test data in many places, that should not suggest to you to abstract away the test setup - rather, it should hint at what is likely a poor separation of concerns and/or insufficient decoupling in your software's design. If you are duplicating test code, check whether you have failed to apply the DRY principle in your application's code - don't try to apply it to the test code.
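Here is the promised sketch of the coverage point, in Python (the total_length function is my own hypothetical illustration):

    # One happy-path test yields 100% line coverage of total_length, yet the
    # missing None guard means behavior for bad input is never verified.
    def total_length(items):
        # BUG: no guard against items being None.
        return sum(len(s) for s in items)

    def test_total_length_good_input():
        assert total_length(["ab", "c"]) == 3  # covers every line

    # The behavioral test the coverage metric can't tell you is missing:
    def test_total_length_none_input():
        try:
            total_length(None)
        except TypeError:
            pass  # current (accidental) behavior; the contract was never decided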
And, in my opinion, the most important and common misconception I see here, and I really feel that it should be more widely understood - and, in fact, that many problems with legacy code will likely largely stop occurring if this mindset becomes widespread:
* Why do we write unit tests?
We write unit tests to verify the behavior of written code with respect to various inputs, yes. But that is only the mechanics of writing unit tests, and I fear that that is what most people think is the sole function of unit tests; behind the mechanics of a method there should be a philosophy, and there is.
Unit tests actually serve a potentially (subjectively, I would say "perhaps almost always") far more vital purpose in the long term: when an engineer writes unit tests to verify the behavior of the code he has written, he is, in fact, writing down an explicit demonstration of what he intended the program to _do_; that is, he is, in a way, leaving a record of the design goals and considerations of the software.
(Slight aside: in my opinion, being a good software engineer does _not_ mean you write a clever solution to a problem and move on forever; rather, it means that you decompose the problem into its simplest useful components and then use those components to implement a solution whose structure is clear by design and is easy for others to read and understand. It further means (or should mean) that you implement not only verification of the functionality you had in mind and its robustness to invalid inputs which you cannot guarantee will never arrive, but also implement in such a way that it indicates what your design considerations were and serves as a guard against a later change, by someone else (or yourself!), that unknowingly contradicts those considerations.)
Later, when the code must be revisited, altered, or fixed, such unit tests, if well-written, immediately communicate what the intended behavior of the code is, in a way that cannot be as clearly (or even necessarily, almost definitely not immediately) inferred from reading the source code.
In summary, these are the main points that stuck out to me in the conversations here; I do want to emphasize that the last point above is, in my opinion, the most glaring omission here, because it is an overall mindset rather than a particular consideration.
Obviously not cross-OS, but it might be good for bare-metal stuff. I've gotten libraries in the past compiled with weird ABIs. This sounds really neat.
Very noble goal, but I can imagine that it can take a lot more time to do that than just writing a bunch of assembler instructions.
Perhaps there could be some intermediate approach, where LLVM can learn from an IR/assembly pair and improve itself (?)
The biggest reason to drop to assembly is that there are high-level constructs that the compiler is very unlikely to recognize and optimize effectively. Things like AES-NI, hardware RNGs, and similar.
It also wouldn't tie me to any particular library - I think the only actively maintained one is the C++ one.
Doesn't AS/400 use an IR approach as well? Which has let IBM seamlessly migrate the underlying CPU a few(?) times now?
Any LLVM experts have thoughts on this or my original goal within the context of LLVM's current situation?
If you're starting from a clean slate, what's the benefit of writing IR? Why not use C? After all, IR won't really give you complete control over the generated code, and it's still an abstract VM (albeit one that obviously allows writing IR that will only sensibly compile on a specific arch - e.g. system register accesses and so on).
Some CPUs have specific idioms that are not only hard to translate but need to be used fluently, like a natural language.
Btw, I never use any software named after a myth that was a pure failure, such as Babel or the Death Star. It makes me feel like people intend to fail.
Somehow I'd never seen Amit's work before, and Bret's only very recently, despite being inadvertently bitten by the same bug as both of them a couple years back. Granted, a conservative estimate puts either of them as being about five hundred times more productive than myself.
Bret's work in particular is humbling. The "explorable explanations" concept was something I'd given a lot of thought to, and it turns out Bret had dedicated an entire article to elucidating it back in 2011, years prior.
Perhaps the greatest irony of being obsessed with accelerated learning is that while you're trying to build the tools or technology to enable it, you find yourself wanting the very thing you're building. e.g. "I could build my magic learning computer much more quickly if only I had a magic learning computer!" While this is frustrating, it at least serves as constant validation, as you try to force yourself to pay attention to some dry, overly-verbose reference on a particular subject.
I love the blog. I love every time a new post comes out. But this reads as if 2016 was a total failure where nothing was produced and no goals were achieved.
How did Amit live over the past 5 years? Did he have no income? Did he do non-developer work making Red Blob Games just a part time endeavor?
I'm confused. :(
MLC is half as expensive as SLC. TLC is 33% less expensive than MLC. QLC is 25% less expensive than TLC and 75% cheaper than SLC. Not to mention transparent compression algos. As the controllers improve you can get more bits of storage from the same amount of flash for free. Longevity and reliability suffer, but hey, cheap SSDs!
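A back-of-envelope check of those ratios in Python (assuming the per-cell cost stays constant as bits per cell increase, which is the idealized case):

    # Cost per bit falls as bits per cell rise.
    bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}
    cost_per_bit = {k: 1.0 / v for k, v in bits_per_cell.items()}

    print(cost_per_bit["MLC"] / cost_per_bit["SLC"])      # 0.50 -> half of SLC
    print(1 - cost_per_bit["TLC"] / cost_per_bit["MLC"])  # 0.33 -> 33% less than MLC
    print(1 - cost_per_bit["QLC"] / cost_per_bit["TLC"])  # 0.25 -> 25% less than TLC
    print(1 - cost_per_bit["QLC"] / cost_per_bit["SLC"])  # 0.75 -> 75% cheaper than SLC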
RAM only gets cheaper through improvements to semiconductor processes, which can also be applied to make flash cheaper (big fat asterisk: those processes are very different), while improvements to flash that allow more levels per cell can't be applied to RAM. The price difference between flash and RAM will only continue to grow.
Modern flash is quite "analog". The first company to figure out how to reliably store 32 voltage levels per cell (Five bits. PLC?) will make a quick billion.
Today 2GB/s is considered very good for an SSD but that would be brutally slow for system memory. DDR4 memory is typically 30-60GB/s per bank with the low end being two-channel, the high end being four.
DRAM has also been the subject of aggressive research and development for many, many decades, while large-scale production of flash is a relatively recent phenomenon. It's the widespread adoption of smartphones, thinner notebooks, and ubiquitous USB keychain-type devices that has pushed it to the volumes it's at now.
There's also the concern that DDR memory must have a very high level of data integrity; bit-flip errors are severely problematic, and it can't wear out even after trillions of cycles. Flash has more pervasive error correction, and while wear is a minor concern, it's still possible to exhaust it if you really, really try.
I'd say the reason flash memory prices are steeply down is the new "3D" process used by Intel and Samsung has been a big game-changer, allowing for much higher density. DRAM has seen more gradual evolution through the last few generations.
The demand for slow RAM drops precipitously after whatever Intel chipsets use it stop being used in new systems (not sure if the same is true in the embedded market). For example, nobody's buying DDR2 these days. So the economies of scale dissipate and fabs retool faster.
So while both devices have economies of scale, SSDs have an extra dimension to their demand curve for performance that allows for slower higher density chips to still be profitable.
Theoretically RAM could be built that way but it would be much slower. Every cell read/write would need to go through an ADC/DAC, and the noise is much higher due to leakage. This slowness isn't much of a problem for FLASH because its competition was spinning disks that were slow as molasses anyways.
At least at the retail level, in the Best Buy where I worked until recently, I watched solid-state drives transition from something only high-end computers had to something that was standard even among the lower-priced value machines. We had customers complaining about the smaller drive sizes because they were so accustomed to the gigantic storage offered by spinning-disk media at its height of popularity.
I'd love someone with more industry knowledge to chime in though, as my own experience here is pretty limited. This is simply what I've observed in my own corner of the world.
As a second bonus, even on old systems SD card circuits are relatively small (compared to a 5-60" LCD they certainly are). Wafers are round and old wafers are used to manufacture LCD displays, so small chips can be placed around them in the manufacturing process and get really good economics by having lots of manufacturing options.
So same reasons displays are getting cheap, except they're even better. So the race to the bottom is happening pretty fast for SD cards.
Not entirely sure about this. Might be entirely wrong, but I'm not sure how to confirm this.
China has decided to pour tens of billions into the NAND and DRAM industry by 2020; until then the price should be very much stable/predictable.
Is it the technology?
* Flash cells can store more data and be produced more cheaply per cell. But they are more complex to read out, and slower.
This can explain part of the difference, but probably not the factor of 40 given by the OP.
* Flash and DRAM probably use different processes.
This could explain a bit but look at the next point...
* DRAM has a much longer history and (at least in the beginning) much higher capital investment.
...which means that DRAM should have the technological advantage. At least through economies of scale.
Is the cumulative investment in flash research already much bigger compared to DRAM research?
Is the process used to produce flash memory so much easier?
Is it the market?
* Obviously people pay the price.
* With DRAM people are hungry for performance more than they are for size.
* We already have more than enough DRAM. The latest MacBook Pro demonstrates that 16 GB of DRAM is enough for just about everybody, but flash storage goes up to 1 TB.
* Of those 16 GB DRAM the speed and power consumption are much more important than the raw size.
Coming back to the cumulative investment: I think the primary pain point for flash has been the price per GB. Flash could be stronger, faster, more reliable, less power-hungry, but those are all secondary. It is fast and reliable enough by using very complex RAID controllers. The power consumption is not a big deal, as HDDs already use a lot and the data mostly just sits around. The main driving point is the price per GB. This is where the money goes in flash development.
On the other hand for DRAM, after some point, it is mostly speed and power driven. Reliability has to be comparatively high as every cell must work over years. Size is mostly increased by improving semiconductor processes where flash probably uses a lot of the same technology. Using the layer stacking technologies of flash is probably not yet applicable because it is not reliable enough and not compatible with the cell layout, maybe it never will.
If we really were hungry for so much RAM we would probably get it. But we aren't. It's good enough. Progress slows down.
DRAM is also a much more mature technology than flash is, so more of the low-hanging fruit for improvement has already been taken advantage of.
I ran into this when I tried to explain the sofa stuck in the staircase mystery in Dirk Gently's Holistic Detective Agency. She (a Ph.D. in robotics specializing in dynamics and physics) pointed out an idealized rigid system could jam in this way without any additional exotic explanation (beyond the exoticness of idealized rigid physics).
These days most if not all people no longer have screen savers, so that wish is likely to forever remain unfulfilled.
 I also wanted the rotating Starbug wireframe from Red Dwarf, but to my knowledge no-one has done that either.
I wonder if the maths is related at all.
Part of the fun of moving is trying to figure out how to orientate furniture to get it into a room - or out of the room, since someone already got it in there, so of course it must come out.
I just want English so I can have some kind of understanding of what I'm doing, but I can't find the setting either in my user profile or in the app configuration.
This was an issue when I tested Gogs a few months ago, and I don't see any mention of a cache, so I think it's still an issue.
For smaller repos though, Gogs works incredibly well.
Regarding this fork, it makes sense if the owner of gogs is not giving write access to others. At the same time it would be a shame if Gitea becomes popular and overshadows Gogs. I hope they can work out a mutually acceptable solution and merge.
Sure, community managed sounds great, but does that automatically guarantee a solid project vision, predictable release cycle and lots of new features?
I don't want to sound negative, but I think the reasons for the fork need to be clearly presented, and potential switchers (like myself) have got to be assured there's a better future with the fork rather than the original.
Anyway, I think this is how OSS should be. We shouldn't have to force people to merge the code that we want. At the end of the day, it's his project anyway.
Good luck to both projects.
Now, if the project has potential or takes off, a 'community' should fork it only if all other avenues have been exhausted, and with good reasons.
It's important for a 'community fork' to let the community know the exact rationale, or 'community' can easily become an excuse for some to seek to control or capitalise on others' work. This does not help open source, especially if the main developers are too busy developing while those who fork have time to market the fork to a community.
Sure, 'copy' the features, and take inspiration from the UI and layout, but so much of gogs looks identical to GitHub that it's nothing but stealing.
I've recently been running 16.04 on it as well; once again, an update without any issues.
Whilst on my other machines (an HP laptop and a custom desktop) I always had _some_ problems with Ubuntu or other flavours of Linux. The HP laptop had the movie player problem mentioned here in the post, and had some issues with running Skype webcam/voice chat.
The desktop had an issue of freezing up randomly, and some audio issues at first.
Currently Ubuntu is running on all these machines, but the old Lenovo laptop was the only one that in all these years worked without any issues.
The line has Intel quad core CPUs, minimal bezel (my 15" is almost the same size as the 14" System 76 Galago Ultra Pro it's replacing), reasonably slim for a quad core, 84Wh battery, 10+ hours of low power dev (baseline power is about 5.25W on my 8GB + 1080p + Xeon E3-1505M machine) in (Arch) Linux? YES.
Oh yeah, and for nerd points the Dell Precision M5510 has the option for Intel Xeon and Ubuntu stock for people doing CPU intensive Linux work (in my case Linux embedded system builds that grind for tens of minutes to two hours).
As icing on the cake, you can easily get parts (batteries, motherboards, etc.) on eBay if you ever need to fix it yourself, which is a sharp contrast to the System76 Galago Ultra Pro (whose parts are non-existent) that I picked up a few years ago after I ditched my last Thinkpad.
I keep looking at the Thinkpads, but they seem a generation behind.
What I miss at Dell and Apple is the possibility to configure your hardware a bit so that it better fits your needs. Dell was better about this when I purchased my Dell Latitude 7 years ago.
I did not choose an MBP because I feel safer with Linux in the long run. I heard that security updates stop two years afterwards and that software upgrades make the 'old' hardware a lot slower.
In the end every OS somehow sucks, but Linux sucks least.
I think something is slowing down your boot, I get faster boot on a 2008 thinkpad running the same OS.
OT: systemd was supposed to improve boot performance but it has actually become much worse. Upstart on a weak chromebook boots in under 2 sec, why shouldn't your current generation thinkpad with a fast SSD match that?
Seriously Thinkpads are the best dev laptops ever.
I am selling my much newer MacBook Pro Retina now, as the Thinkpad is so much more functional; the keyboard is amazing and the feel of the machine itself is fantastic.
I am thinking of buying an X260 brand new because I need a newer CPU and better battery life, but for sure I'll only buy Thinkpads or Latitudes (I have one at work, amazing machine) from now on.
Personal machine (Ubuntu/CentOS with 1 or 2 VMs running sometimes): for me, an AMD quad-core A10-7300 with 8GB DDR3 RAM and a 1TB HDD (Acer Aspire E15) is a perfect Linux development machine, and it costs less than $500. Unless you are running 3 or more VMs or doing stuff like high-end data processing, using 16GB of RAM for development is pointless.
Work machine (Windows / Fedora 19 with 3 VBox VMs running most of the time): we (a team of 7) received new Lenovo Thinkpads in 2012, with a 256GB SSD, 16GB RAM, and an i7 processor. Within 18 months, 3 or 4 of my colleagues faced hardware-related issues (suddenly stopped booting, etc.). Luckily mine survived until I left the company in 2015.
3. Same resolution but smaller monitor and smaller overall size makes for easier traveling imo.
7. I have the exact same issue with hitting the touch pad when typing, but I've learned to go slower and avoid it.
8. I first ordered the x230 by accident with the larger battery and was amazed at the working time of 12-15 hours, but it was also quite bulky. So I re-ordered with the keyboard backlight I had been missing and with the slimmer battery, and I'm quite happy with the slimmer form factor while still having a good 6 hours of work time.
11. It's clearly not a media machine; it even lacks shortcut keys for pause/play.
Coreboot support too if that's your kind of thing.
Stay away from the tablet x201, forget that noise.
All of Lenovo's offerings seem overpriced as with the industry as a whole.
Tax evaders from the USA are safe.
Governments, especially so-called "progressive" governments, have the wrong belief that our money belongs to the government first and citizens are permitted to keep their 'share.' I know many people here think that a total loss of financial privacy is 'fair' because of some sort of misguided class warfare. But how would you feel if your entire computer and web history were transmitted to the government for the purpose of identifying crimes you might commit, as opposed to in response to a warrant establishing probable cause that you actually committed a crime? Having money in an overseas account does not even come close to reaching a threshold of probable cause for tax evasion. Having overseas money (or in my case, a domestic French account) doesn't mean you are a potential criminal any more than being black means you are a potential criminal. This is financial "stop and frisk."
In the US this loss of financial privacy and these 'disclosures' amount to a warrantless search. I welcome any counter argument.
Just to be clear, I'm not a 'fat cat' billionaire -- just an American living in France that has the 'privilege' of being taxed (heavily) by both the US and France. My French bank has to disclose information about my accounts to the US Treasury and I have to fill out an FBAR each year to 'prove' to the US that I am tax compliant, thanks to FATCA.
If it weren't for the obvious class-warfare aspect, liberty-minded progressives would be losing their minds over this violation of the Fourth Amendment. They'll howl over warrantless wiretaps, but many remain silent about the financial equivalent.
I am not advocating tax evasion, nor do I support or defend those actually evading lawful taxation; I am opposed to a police state where everyone is a criminal until proven otherwise.
With the rise of synthetics, and the first maker chem-bot not far away, I wouldn't wonder if the narcos would do what all businesses do when change is upon them: lobby for protectionism.
I'm against drug liberalization, but just to reduce the economic and social fallout this would be worth it.
The usual web link didn't work for me this time.
Genetic engineering has exactly the same problems as modifying binary-only code. After some very complex observation of the system you may spot the point that is responsible for a certain problem, but you will never be sure (as in provably sure) that the "fix" you put in place:
1. is going to stop all the occurrences of the behavior you want to fix;
2. will only affect the behavior you want to fix;
3. will have no effect on other systems that you did not modify.
(The article is mostly about 1 and a bit about 2 and 3.)
Modifying legacy code is hard enough when the uncommented source code is around. Changing the binary directly is both a great engineering feat and something to be scared of.
Normally, eggs provide all mitochondria. The sperm's mitochondrion gets left outside during fertilization. So in most zygotes, there's a new pairing between the father's nuclear genes and the mother's mitochondrial haplotype.
I wonder what that entails. I do understand that most zygotes fail to implant, or get terminated very early in development. Maybe this is one of the causes. Anyone know?
So it might happen that even if 100% of the 'old mitochondria' are cleared, the new symbiosis will also be defective, as some of the 40 genes required by the hosting cell might be missing, with the new mito lineage replacing the original one.
As the article notes the procedure isn't effectively clearing out all of the old mitochondria, and dynamics can favor them in the future.
It is known that variations in mitochondrial DNA produce radical differences in competition between mitochondrial strains in the cell. That is how deletions affecting the OXPHOS machinery cause one mutant to take over the whole population very quickly - some differences produce mitochondria that either replicate better or resist quality control more effectively. That is one of the causes of aging, but the same principle exists for other differences between mitochondrial genomes. If you put two or more in a cell and let them fight it out, it's hard to say in advance what the outcome will be given present knowledge.
So, basically, the people working on mitochondrial replacement need to make their tools for cleaning out the old mitochondria more efficient. If 100% success is achieved, that genome isn't coming back.
Alternatively, actual gene therapy might be a better approach - though challenging if you want to edit mitochondrial genomes, as you have the same problem of coverage and competition. There is allotopic expression, moving mitochondrial genes to the cell nucleus, which is feasible via today's gene therapy. Given the amount of work needed to copy mitochondrial genes into the nucleus, however (the challenge being how to alter them so that the proteins produced get back to the mitochondria, something that has been achieved for three genes so far), it might be more cost-effective to work on better clearance and replacement technologies for the near-term needs of assisted reproduction.
It's not a risk of editing the genes so much as a risk of the treatment failing.
Makes me wonder if I'm just influenced by the work happening in ML, or if we are really approaching what the brain is already doing.
The real question is where these "fake" pieces of art will be placed in our society.
I'm not sure why, but the fact that they used the term "hallucinated" is a little unsettling.
And hopefully self-driving cars, quadcopters, and other drone robots can use a 3D model of a city to better navigate the real world. How about teaching a self-driving car with a "GTA 6: Your City" game?
SuperTuxKart is a free, open-source racing game. This page is about generating 3D levels for the game, using OpenStreetMap data.
How do they figure out the building heights at all then?
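If it helps, OSM buildings sometimes carry explicit height or building:levels tags, and a common trick is to estimate from those. A rough Python sketch (the ~3 m per storey figure is a widespread heuristic, and the fallback value is my own guess):

    # Rough height estimate from OSM tags.
    def estimate_height_m(tags):
        if "height" in tags:  # explicit, e.g. "12" or "12 m"
            return float(tags["height"].split()[0].rstrip("m"))
        if "building:levels" in tags:
            return float(tags["building:levels"]) * 3.0  # ~3 m per storey
        return 6.0  # unknown: assume a two-storey building

    print(estimate_height_m({"height": "12 m"}))        # 12.0
    print(estimate_height_m({"building:levels": "4"}))  # 12.0
    print(estimate_height_m({}))                        # 6.0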
My personal suggestions are Duolingo, and "Drawing on the Right Side of the Brain" by Betty Edwards. I went from stick men to badly-proportioned but otherwise lifelike still lifes in a few hours with this book. I have a very strong audio memory, so Duolingo works well for me. The most important aspect of getting not-terrible at anything is deliberate practice. Drills and boring exercises work very well for me.
Drawing is a giant world that means many different things. Being good at drawing is also very subjective.
A fantastic example of that is the book "Drawing from the Right Side of the Brain" which I see mentioned already. That book is an interesting read and I did enjoy it myself, but I should caution that it teaches more about visually tracing. Some people consider that an example of skilled drawing and if that is what you are looking for then go for it.
From another angle some people consider skill at drawing to be how pleasing it is to look at. This generally has more to do with the line work and shading and color usage. You can draw a significantly anatomically inaccurate arm with beautiful line work and styling and some people would consider that skilled drawing.
Yet another would be to create something from the mind without a visual reference. This has more to do with an understanding of mass and depth and space than either of the two above. And to some people this is what they would consider skilled drawing.
These are only three of the many, many possible goals of a drawing.
Why am I telling you this? Because to me the endeavor of learning to draw is learning what you personally consider a good drawing. The physical world is not made up of lines and smudges. When you draw you are continually making those translations and decisions. That process of discovery is what will lead to you become better at it.
In the end, there are only two reasons why you put a line in the wrong place. Either you physically missed the correct spot with your pencil, or, more likely, you haven't discovered where the right spot is yet.
What it comes down to is spending time getting your hands dirty making things (or getting real practice), even if your output sucks for a long time. (And it will.)
Favorite personal example: One day, when I was around 19, I decided that I wanted to be an artist. I hadn't seriously drawn anything since I was about 10. My current skills were atrocious, but I started drawing every single day, anyway, undeterred.
Of course, at first I was awful. But I copied old master paintings, drew pictures of famous sculptures, etc.; all of my free-time, I spent drawing. And slowly, but surely, I got better at it. I did this every day for probably about 3 years or so, and by the end of it, I was very accomplished. But it was a constant effort that took years. I probably did over 1,000 drawings, hundreds of paintings, and so on. And about 90% of them were awful. But the good stuff, it was really really good. I guess that's the price sometimes. Nobody is a genius all the time. Even Michelangelo, or Picasso.
The thing is, if you find something you enjoy, it doesn't feel like work or drudgery. (Even though drudgery is the only way to get better.) Instead, it's an activity that you want to spend time on, and when you do--time passes so quickly you don't know where it went. It's like living life on fast forward. (Maybe that feeling's the real-life inspiration for the old trope of the training montage. A deep kernel of truth beneath the fantasy, after all?)
I started studying (math) at university this fall for the first time in my life. I'm 32 years old. I have extreme problems adjusting to the required workload. The other freshmen struggle as well, but I clearly have more problems.
The problems arise most noticeably when I'm not subjected to direct peer pressure, that is, when I'm not sitting in university doing homework with my group partners. As the workload is (or seems) so extreme, at least for us freshmen, I just didn't have time to do anything other than sit in uni doing homework, often until 8 or 10 pm, or even into the night when there was a deadline the next morning.
What I should have done differently so far is prioritize learning the material over just trying to get stuff done inefficiently. I realise that these inefficiencies, and getting rid of them, are a normal part of growing up academically.
The procrastination problem shows up most visibly in my spare time, where I have the time but just cannot bring myself to learn the material. This is where the book really helps. I admit I just finished it, and it will take some time to show results. The thing is, I knew for years (which I have partially wasted) that gaming, reddit, HN, twitch.tv, etc. are a strong negative influence on me. The book helped me realise just how bad my procrastination problem really is, and it has already helped me be more productive in situations where otherwise I just couldn't bring myself to work on important stuff due to distractions.
* Positive reinforcement: cheerful noises and visual progress once I completed a section.
* Negative reinforcement: if I didn't practice at all that day, I would get a notification at 11pm. If I ignored them for a while, they got super passive-aggressive, saying things like "These don't seem to be working. We'll stop sending them". I felt guilty and would start again.
* I wanted to keep my daily streak going. It made me feel like Jerry Seinfeld with his "write a joke a day; put an X on the calendar" technique.
* I liked the concepts of experience (exp) and levels; it let me feel like I was making concrete progress, even if I was totally incompetent. I indulged my gamer side while still being productive!
* Duolingo works just as well in the browser as on mobile.
* Training sessions were short enough that if I only had a few minutes of downtime, as long as I had my phone on me, I could actually be productive. This made subway rides that much better.
* Duolingo offers a practice mode where I could work specifically on my speed if I had longer chunks of time and wanted to dive deeper.
And today, I can totally speak intelligible Swedish. It worked.
https://www.youtube.com/watch?v=OxoXe5FDIkA - some phrases are off (I should've said "hur man sköter marknadsföring själv") and the pronunciation is off.
The concept goal: Learning Japanese! (I've been 'learning' for 3 years now).
Here's a screenshot: https://i.imgur.com/afAW49V.png.
Basically it's a moodboard that combines a timer, todos, insights, notes, images and links.
'How does that help me learn new things better' you ask?
The challenge with goals - as I see it - is keeping the path from the you of today to the aspirational you (the one that speaks Japanese) clear. January kicks off, a whole bunch of life gets in the way and when you finally get time to focus that path has become a nebulous mess.
The idea with Goals is to be able to open the app and immediately know:
1. What have I achieved so far: insights on hours spent, tasks completed and how close you are to your goal.
2. What do I have to do next: this is your "how". Tasks, links, audio files, notes.
3. Why am I doing this, again? images, media, notes.
The 'Why' doesn't really fit into most methods of learning but I think forgetting this is the biggest point of failure.
I'm going to grab a coffee and get this shipped. I'll post it to Show HN tomorrow and you can see if it's for you.
A few things that have helped me so far:
- Setting aside at least 1 hour a day to draw. This one is the most important.
- Drawing from the Right Side of the Brain by Betty Edwards
- Fun with a pencil by Andrew Loomis
- Ctrl+Paint: http://www.ctrlpaint.com/ especially the Traditional Drawing, Composition, Perspective and Anatomy sections.
- Sycra's YouTube channel: https://www.youtube.com/playlist?list=PL0373FA2B3CD4C899
Don't overthink. Just do it.
And after a few months of trying and reading, the process will get better automatically.
The key is to just start and be consistent.
The ideal scenario is to become immersed so the topic becomes "part of you".
I also give myself a two-year gestation period of incremental learning to see results and build muscle memory. I did this with programming, and once you've done it with one topic, you build the confidence that it will come to fruition with anything else you decide to do.
I'm not a fan of peddling the idea that anything worth learning can be learned "fast". It may work for some people, but I think they are the minority. In my experience there's usually a fair amount of self-deception involved in "fast" learning... or "fast" anything, for that matter. I'm not one of those people.
If you want to learn a new language, walk around with headphones listening to people speak it. When you talk to people in English, ask yourself in your head how to say the same thing in the other language. And so on.
Additionally, you need to write about whatever you're learning. Essays, reviews, whatever strikes your fancy. Learning really happens when you try to use the information.
This is especially true with languages, unless you can be fully immersed, but I still recommend it. Look into metacognition and spaced repetition. You'll need a system that works for you but look for techniques backed by research.
Specifically, Fluent Forever is a fantastic book on learning how to learn languages, and Scott Young's blog mentioned above is great. I second the art recommendation "Drawing on the Right Side of the Brain". Good luck!
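Since spaced repetition comes up above, here's a toy sketch of the idea to make it concrete: a Leitner-style scheduler where cards you get right move to boxes with longer review intervals, and cards you miss drop back to the start. (The intervals and names here are arbitrary, not taken from any particular app.)

    import datetime

    INTERVALS = [1, 3, 7, 21, 60]  # days until next review, per box

    class Card:
        def __init__(self, front, back):
            self.front, self.back = front, back
            self.box = 0
            self.due = datetime.date.today()

        def review(self, correct, today=None):
            today = today or datetime.date.today()
            # success promotes the card; failure sends it back to box 0
            self.box = min(self.box + 1, len(INTERVALS) - 1) if correct else 0
            self.due = today + datetime.timedelta(days=INTERVALS[self.box])

    def due_cards(deck, today=None):
        today = today or datetime.date.today()
        return [c for c in deck if c.due <= today]

    deck = [Card("hund", "dog"), Card("katt", "cat")]
    for card in due_cards(deck):
        print(card.front, "->", card.back)
        card.review(correct=True)

The point is just that the schedule, not willpower, decides what you see each day; real systems like Anki use fancier interval maths, but the loop is the same.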
He was in the process of studying Japanese at the time of the interview and had some specific comments about it vs the several languages he's learned before.
The key thing to understand is that when learning something new, you will likely be pretty bad at it for a long time, until you have practised and researched it A LOT. Then you will find you have gained some expertise.
When you anticipate this, I think you are much less likely to give up on the basis of "I'm no good at this".
Anyone can become competent at pretty much anything, given effort and practice.
AND IMPORTANTLY, by "practice" I mean doing it for real, not doing training exercises.
This is how I'm trying to make it work, at least.
Also, I built a search engine for lectures, with a lot of talks from tech conferences, which I find helpful for learning about software development topics - https://www.findlectures.com
It's pretty good.
The main idea is that learning is supposed to feel hard. That sense of frustration and confusion is what building new neural connections feels like.
I usually tell people who want to learn to draw to go to http://johnkcurriculum.blogspot.com/2009/12/preston-blair-le..., get the Preston Blair book, and start doing these exercises by master animator John K (creator of 'Ren & Stimpy'). You will get a lot better, a lot faster. These exercises focus on simple cartoon characters who wear a lot of their construction on the outside; once you can draw cartoon characters, you can keep drawing more of them if that's your thing, or you can build on top of that and start learning anatomy and drawing more complicated characters. (Or do both.)
There are other well-regarded drawing courses on the internet, and someday I should probably pick a new one to send noobs to, what with John K kind of being an asshole - but I learnt a hell of a lot when I worked under him, and he is really good at teaching this stuff.
Most of what I know about drawing more complicated figures came from a combination of Bridgeman's "Constructive Anatomy" and Loomis' "Figure Drawing for All It's Worth", and a life drawing teacher who hewed very closely to Glen Vilppu's drawing manual. If you can fit some life drawing classes into your life then TAKE them, you will learn a ton.
Also: Make a space in your life to do this. I ride the bus a lot, and before the advent of smartphones, I'd have little to do to amuse myself besides stare out of the window, read a book, or pull my sketchbook out and draw. Maybe draw some idea floating around my head, maybe draw something I glimpsed out the bus window, maybe something based on my fellow passengers, maybe just some cubes, or the hand I wasn't drawing with. I got a lot of practice in without feeling like I was making myself "practice". Whatever you may be learning, if you regularly drop yourself into a time and place with nothing much to do besides the thing you wanna learn, then you'll do it more often.
Don't blow several hundred bucks on a ton of paints, or on pro software and a Wacom tablet. Just start with a few hardback sketchbooks and some pens and pencils. Oh, and not mechanical pencils. Just grab like a pack of Ticonderoga 2.5Bs, they're cheap and pretty good. And try holding them so that the side of the point addresses the paper for a lot of the beginning of your drawing; this will do several things for you:
* it will train you to keep your wrist fairly steady, and to draw more with your entire arm; keeping your wrist straight and steady will help keep the Carpal Tunnel Fairy away.
* it will make your initial lines light, and prone to fade away as your hand brushes the paper; this keeps you from bearing down to gouge an impossible-to-erase line in the paper, and gives you more room to make mistakes before having a dark, illegible mess of lines you can't draw over.
Don't get lost in trying to save a drawing, either. Paper's cheap, turn the page and try the same subject again, or a new one.
When you make a picture you like, hang it over your drawing board, turn it into your computer's backdrop, and keep trying to draw something better than it. You may find yourself hating it because you start seeing all the mistakes. That's great - go draw something new that doesn't make those! (This may take many tries, some mistakes are harder to stop making than others.)
Don't worry about "your style". If someone points out a mistake in your drawing and you find yourself wanting to say "but that's my style!", then you are just covering up your weaknesses unless you can actually sit down and bust out a version of the drawing that Does It Right. When you can do that you can legitimately say "dis mah style". Steal stylizations from artists you love (you're looking at other people's art, right? A lot?), make your own based on reality.
You will find a lot of people declaring "rules" of drawing. Always do this, never do that. The truth of the matter (IMHO) is that all rules of art are actually just warnings: "never do this" really means "if you do this without thinking about what you're doing it'll probably turn out badly". Know the rules, know which ones you're breaking, and break the fuck out of them while staying well within the boundaries of the other rules you know.
(I spent a decade in the LA animation scene, then burnt out and draw comics now. If you wanna look at my work to decide if I'm someone who you should listen to in this, it's all at http://egypt.urnash.com)
Chinese TV sucks. I paid no attention to the ctrl+C/ctrl+V plots. However, it did help me learn tonality in a way that my classmates' butchered American pronunciations never could. Subtitles also helped with learning characters.
I have found that learning works best with as much immersion as possible. It is never as casual as a subway commute crossword puzzle.
That's the whole idea of it, sonny.
> Learning a language is also pretty intimidating
If you're like most people, then the very experience of having learned your very first language, your mother tongue, was probably "quite the struggle, that you never experienced as quite-the-struggle, because you had no preconceived notions as to what constitutes quite-the-struggle".
Doubtless, everyone who finds it somewhat gratifying keeps at it. The question, then: when is it gratifying and when not? I posit it's gratifying not primarily when you garner praise or grades from others, but simply when you realize you grasped things about it this week that you didn't grasp, or had no idea of, just the previous week.
Just be a kid, poke holes in everything, bend it, try to break it, combine everything with everything, laugh or marvel at what results.
Now, languages and drawing are a bit different. What's the point of "learning to draw" if you can't draw in the most outrageously "you" way? Don't draw "nicely"; that should evolve over time. Draw what comes naturally to you. If only random lines come to you at first, great: that's the first annoyance that'll before long force you to figure out the trick to arriving at slightly-less-random figures. Go wild. Languages are slightly different, as in the end you want to comprehend and be comprehended. Maybe human languages sit somewhere in the middle of the spectrum between wild drawing and highly restricted formal grammars such as programming languages. If you keep tinkering at those extreme ends of the spectrum, as always, things more in the middle might fall into place a little more easily.
Where am I going with all of this? Learning (anything) from first principles, by falling-down-and-getting-up and trial-and-error and not-constantly-assessing-your-current-proficiency, is the long and hard way, but it's the surefire way and the natural way. And for many, certainly in this crowd I'd wager, the most gratifying one.
> Learning to draw especially is pretty overwhelming for me; I have no idea how to start, as someone with no skill or experience in drawing whatsoever.
Well what would be the point of learning if it wasn't overwhelming, if you already knew where to start and where to go from there, if you already had the skill and experience. I truly do wonder now what your definition of "learning" is ;D
I found that whenever I invested a lot of time in deeply enjoying, in an "almost professional-fulltime-fan" way, the works of highly skilled creators I respect and admire in a topic (painters or certain comic artists; musicians, when it comes to learning instruments or composition; or, for languages, brilliant authors of awesome works, as well as perhaps poets and songsmiths), the repeated, active and prolonged immersion in their work set the stage properly and "pre-seeded my brain" in profound, hard-to-explain ways. This very period of active admiration irresistibly leads parts of the brain on a diversionary trail of "just how did they achieve all this brilliance" that keeps finding new leads and cues to then prompt you to purposely proceed with in earnest.
Quite wordy, huh? I'm sure there are a thousand handy "learn anything you want in 21 days" guides out there as well. Shame I never felt the need to procure one; why, I could be a master painter and a most proficient converser in a whole host of languages by now! Wouldn't that be impressive. But this never seemed like fun. If you want to learn for fun and with fun, set small goals and even smaller expectations, and allow as much time as possible. Maybe it's just me, but "I'd like to be a great painter (or French speaker) in 21 years" sounds like a much more delightful endeavour than in-21-days (or weeks). Because if that's the outset, chances are that as a byproduct you'll already be "really quite decent, better than you expected" after 21 weeks to months; but more importantly, by that time you'll no longer even worry about this, as immersedly spending much time with X, Y and Z will have become just part of who you are as a rule.
That's probably the wordiest way I've ever said "Just Do It and Keep At It". Well, I've done my silly deed of the day; time to get back to my own hackery now.
a.) Print is a lot better than digital.
b.) You shouldn't read books linearly. I generally jump around a lot and read a particular book several times. The first pass might take just an hour or two; I try to understand the structure of the book and create a scaffolding of sorts, and I might get 15%. During the second pass I might try to get the next 30%; by then I should have a good idea of the concepts in the book, though I might not be able to solve all the problems. In the next pass, I get the next 30%. A fourth pass is optional, if you really need to understand 100%. The best part is that a lot of the time you don't actually have to do all the passes; the first two might be enough.
The one thing I always hated about school is how you are forced to master each chapter 100% before moving forward. Sometimes going forward actually helps you understand previous chapters because it puts them in context.
c.) Highlighting helps me a bunch. Some people have the issue that they end up highlighting too much. I don't highlight when the material is all new to me; only after I've finished the chapter do I go back and think about what's important to highlight. It feels like the process of selecting what's important might matter more than the highlighting itself. But when you come back to it later, the highlighting definitely helps. Writing some notes in the book with a pencil is also good.
d.) More important than fully mastering all the material is making sure that you aren't bored or frustrated. If you can't move forward with something, give it some time, come back to it.
e.) Generally if I'm confused, doing a quick review pass from the very beginning of the book tends to clear things up a lot.
f.) Doing a "compare and contrast" between things that seem similar (or even if they don't) is usually a good way of strengthening some connections.
Btw, over the last couple of weeks I've been trying to learn ML almost full-time. In the process, I think I've managed to figure out the best resources for this, and I'm in the process of setting up a website discussing what I've found. I started working on it yesterday, so it's not quite ready yet. However, if you'd like to learn ML in 2017, I'm hoping to make the process a lot less painful. You can sign up here if you'd like to get notified when it's ready.
Drawing... well, I'm not having much luck with that. Did 6 months of drawing 1 hour a day last year. Think I got a little better. Reddit has a couple of useful groups to follow:
The government looks the other way, since building is one of the last big job creators now that the mining boom is over. For example, there is a government body for foreign investment breaches (rules like: you're not allowed to buy 'used' property if you don't have permanent residency, though I've personally talked to quite a few Chinese students whose parents did that for them), but it has never once initiated court action.
So everybody profits in the short term - Chinese corruption can move 'dead' money out of the country, Australian government gets to present itself as a job creator. Except young Australians, who have to rent and cannot rely on real estate property for their retirement.
In the long term, the bubble is going to pop, and then you have dead cities with empty high-rises falling apart (but then again, people have been saying for at least 10 years that the bubble is going to pop any day now, and it's still inflating).
 Contains no numbers: http://www.smh.com.au/comment/grey-money-from-china-helps-bl...
NL - Amsterdam up 15% in 2016
AU - Sydney up 17% over 5 years
NZ - up 13% Oct 2015 to Oct 2016
BC - Metro Vancouver up 31.4% August 2015 to August 2016 alone. BC (not just Vancouver) housing prices have almost tripled since 2004.
No, these new rules the BC government has put in place are not going to work. I've commented on the reasons before here:
While political parties can wallow in the extra taxes garnered from sky-rocketing real-estate prices, and in sizable party donations, they have no incentive to introduce this second tranche.
For example, any party in power in New South Wales will keep its budget in the black simply from "stamp duty" taxes collected on real-estate transactions.
The Foreign Investment Review Board was and is a farce and that's how the government likes it.
Recent activities on this:
Note ban (Nov 2016) => https://en.wikipedia.org/wiki/Indian_black_money#Ban_on_1000...
Using analytics (today's news) => http://economictimes.indiatimes.com/news/economy/policy/i-t-...
Q: In Canada, where does the money end up and why? A: Vancouver is the preferred destination, by far, because of perceived more relaxed anti-money laundering on-boarding compliance and more importantly, easier access to better schools and lifestyle for children of Chinese foreign nationals.
1. Do what Vancouver did: add a tax on foreign property ownership.
2. Tighten the checks on the origin of the money? How exactly, especially when it's coming in as all cash? Maybe force it to go through a bank, where a more thorough check is mandatory per the IRS? But then again, banks are not the most trustworthy in this country.
3. Other ideas?
This is a pretty large economic tide: almost twice the size of California's economy, all at once, which would rank it as the fifth-largest economy worldwide, if it's worth thinking in those terms.
Given that an approximate number of participating Chinese nationals has been enumerated, if they suddenly disappeared (and ~20,000 is certainly within the realm of possibility), what would a sudden halt of two trillion dollars (legal or not) do to the rest of the world?
(Note that with 16,000-18,000 people doing the moving, that's a very large amount per intermediary.)
"The Web App Development Guide for Marketers"
1) Get a database, because that's where all your data will be saved. "You should have a db if you have nothing else."
- MySQL book link
- Relational databases video
- more intro-to-databases links
2) "Fancy having a frontend?" A frontend is needed because your users will want to use your awesome app
- HTML link
- CSS link
3) Do's and Don'ts
- Have forms and buttons
- Don't make the forms too long. Here's some research showing that long forms lead to fewer people submitting them.
... and so on and so forth.
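(If the guide were real, steps 1 to 3 might boil down to something as small as the sketch below. This is just my illustration, assuming Flask and sqlite3; all the names are made up. It shows how little "a db, a frontend, and a short form" has to be.)

    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)
    DB = "signups.db"

    def init_db():
        with sqlite3.connect(DB) as con:
            con.execute("CREATE TABLE IF NOT EXISTS signups (email TEXT)")

    # the "frontend": one field only, per the don't-make-forms-long advice
    FORM = """
    <form method="post">
      <input name="email" type="email" placeholder="you@example.com" required>
      <button>Sign up</button>
    </form>
    """

    @app.route("/", methods=["GET", "POST"])
    def signup():
        if request.method == "POST":
            with sqlite3.connect(DB) as con:
                con.execute("INSERT INTO signups VALUES (?)",
                            (request.form["email"],))
            return "Thanks!"
        return FORM

    if __name__ == "__main__":
        init_db()
        app.run()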
I'd love to see some broad evidence that email newsletters are effective. I've experienced no strong correlation myself between the frequency of sending newsletters and sales.
The strongest argument I've heard for email newsletters is, "it can't hurt, so you should do it just in case it helps." But that ignores opportunity cost.
Can anyone (preferably someone who doesn't work for an email campaign company) give any strong evidence in favour of regularly sending out email newsletters?
I'll start: How We Got 1,000+ Subscribers from a Single Blog Post in 24 Hours https://www.groovehq.com/blog/1000-subscribers
Silicon Valley has been encouraging the latter because it works, to my frustration. One of the reasons people ask for upvotes on Product Hunt is that everyone else is doing it and there are no visible consequences for doing so. (and I've recently found out that people do the same for HN votes on occasion because they assume it's a part of the culture)
Before you go off shoving fistfuls of money at display-ad retargeting platforms, I'd highly suggest running placebo tests first. The platforms will tell you how you're converting people into paying customers at a $2 rate, but with retargeting you've previously acquired their attention in some form; that's how you're able to retarget them in the first place. The problem is that, particularly with display ads and Facebook feed ads, you tend to cookie-bomb your audience, so you're not getting a fair assessment of how many people actually stayed engaged with you because of your ads.
In nearly all my experience, the actual value of retargeting is never what the platforms tell you it is in their ridiculously misleading CPA reporting.
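To make the placebo-test idea concrete, here's a toy readout with entirely made-up numbers: serve real ads to a test group and placebo/PSA ads to a control group, then compare conversion rates. The difference is your actual incremental lift, and a quick z-score tells you whether it's distinguishable from noise:

    from math import sqrt

    def lift(test_conv, test_n, ctrl_conv, ctrl_n):
        p_t, p_c = test_conv / test_n, ctrl_conv / ctrl_n
        # pooled two-proportion z-score: is the lift more than noise?
        p = (test_conv + ctrl_conv) / (test_n + ctrl_n)
        z = (p_t - p_c) / sqrt(p * (1 - p) * (1 / test_n + 1 / ctrl_n))
        return p_t - p_c, z

    # invented numbers, purely for illustration
    incremental, z = lift(test_conv=540, test_n=50000,
                          ctrl_conv=495, ctrl_n=50000)
    print(f"incremental conversion rate: {incremental:.4%}, z = {z:.2f}")

With these invented numbers the platform would happily claim all 540 conversions as ad-driven wins, while the control group shows most of them would have happened anyway, and the remaining lift isn't even statistically significant.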
This guide is not "a bit basic"; it just barely scratches the surface.
We created our product as a group with engineering backgrounds, and now we're trying to switch some of us to full-on marketing mode, so I'm trying to grab anything I can get my hands on.
Blog content: they say writing anything is better than nothing, and there's some truth to this, but compelling content is far more important. One thing that can happen (seen it) is you hire some content monkeys to make you "content". The result is bland and reads like stale Cheerios. Google will knock your site's ranking down badly if users don't like to read your articles.
When I started writing content for this client my first article got more organic traffic than their whole site with hundreds of pages of "content". This continued with each article I wrote until 99+% of organic traffic to the site came from around 10 articles I wrote.
The biggest thing with content is that it needs to be good quality. Long-form articles with pictures work the best. The other extremely important point is to tailor the subjects to your readers.
If you sell can openers, write content about cans and cooking with canned stuff. Write about ways to open cans when your opener breaks. Write about what to look for in good can openers. Write about how canned food is made. Become the one and only website about anything can-related. Google the questions your readers are likely to type, and get links to your site into the top search results for those questions. Even if you have to pay some site owners, it will boost your rank tremendously.
Make sure a human visiting your site would think it was well made, and Google will too. Use CDNs to make it load fast. Use the latest and greatest TLS certs. Have fully verified email addresses linked to your domain, with all the bells and whistles. Make your site seem legit enough that users would feel okay using their credit cards there.
Email: make sure the emails you send are things your customers want to get. Know the demographic of your customers and tailor your message carefully. It goes far beyond subject line content, if you annoy your customers your emails will get binned as "promotions" and nobody will see them.
2. Send Emails
3. Build Landing Page or other.
4. Repeat 1-3, and adjust the landing page and email template as you get more customer conversations.
Many of the comparisons in the article seem to be similar to that, and with projects that seem far more complex, technically and politically, than what most of us deal with. Other than the fact that all the projects mentioned are called 'trains' or 'subways', and I assume are mostly underground, I don't have enough information to say they are comparable at all.
Sometimes, there is no way for the layperson to analyze the situation on their own.
(MTA officials say the Second Avenue Subway cost as much as it did because of Manhattan's "complex underground infrastructure", as well as the fact that the New York City Subway runs all the time, the latter not being a requirement of the Paris or London systems.)
Are these useful comparisons? How can you possibly compare a train running the length of Manhattan to anything in Copenhagen? I want to know how much a new train line costs in Tokyo.
Something connecting all the way from Riverside Drive/Henry Hudson Parkway, passing through the 125th Street Metro-North Railroad station. (...and maybe even shuttling to Randall's Island, why not?)
It's faster to walk than to take the buses that run that route. Cold weather means waiting for the buses sucks, and the only time it's worth the trade-off is when you're carrying something heavy.
Car service in Harlem is slightly schizophrenic, even with car hailing apps and "boro cabs" (because normal yellow cabs don't operate in Harlem, for reasons I still don't understand...).
The multitude of governments and jurisdictions is the more worrisome problem, however. That kink won't fix itself.
London is far, far older than NY; it's also more congested underground and a lot harder to organise logistically.
Blows my mind that the NY subway costs this much to extend.
Campi is literally "fields". We have that a lot in city/town/village names (e.g. Campi Bisenzio, close to Florence). When used in the name of a place, I would suggest "meadows" as a better translation.
Flegreo (pl. -i), on the other hand, exists as an adjective solely to describe someone or something "from the area west of Naples known as Campi Flegrei".
The etymology is of course related to the volcanic activity and has to do with burning. φλέγω (phlego) in Ancient Greek meant "to burn". In Latin the verb was flagro.
In Italian we still have these words in use:
- "deflagrazione" (it's similar to the English deflagration, but with a broader meaning of "explosion"; it's normally used as a synonym)
- "flagrante". Literally "burning", but the common meaning is "evident" or "in the act", as in "colto in flagrante", "caught in the act", "caught red-handed".
The Greek root, turned into "flog-", is still found in some specific terms, especially in medical literature. "Flogistico", for example. You have that in English too, and it's even more recognizable thanks to the "ph": phlogistic. It means inflammatory, causing a burning sensation.
Any guesses how far we are from being able to control volcanoes so that they would not pose a threat? Too far, I would guess.
Naples is the urban area with the highest density in Europe. An eruption of Vesuvio would likely kill at least 1-2 million people and bring Italy and Europe to total economic collapse.
If we could for once think logically and stop all the culture/tradition crap, we would migrate people out of this area. The Balkans are huge and deserted, with lots of areas with similar climate. Spain comes to mind too. Or even Italy itself.
Obviously, the problem here is always the same. You have lots of people living in the worst possible places, but then, if something happens, it is the State (hence taxpayers) that has to rebuild the houses, pay for the emergency, and so on. This is a huge moral hazard.
When this happens, it is going to immediately change the world's climate to something very much colder than it is now. That suggests to me that some sort of preparation for surviving in very different climatic conditions than the ones we currently experience would be a good investment of time and resources. That said, I'm not entirely sure how we might plan for something like that.
I recently have been working on the technical aspects of building a new house for my aging parents. We went with Lutron's Serena shades, because we also used Lutron's light controls and they feature both Nest and Alexa integration. Construction hasn't been completed yet, so I can't fully report on the final result. But, my initial tests have gone well and setup was simple. For reference, the per-shade cost is around $600 installed. Basic shades of similar quality would likely be $150-200 each, so it's a fair bit more but not outrageous.
I considered a DIY approach, but then thought about the primary users and realized an off-the-shelf solution would be best. Even in your own home, it's hard to overstate the importance of reliability and ease of use for your spouse, kids, guests, etc.
Btw, awesome work OP!
I just got a Google Home, and I want to program/add apps (games) or actions (movie times, ordering a pizza, calling a friend on Skype, calling 911, recording and playing back my sleep-talking, and many other ideas I've thought of).
I can't see why such an app store isn't available yet, for programmers to build on and for either Google or Amazon to profit from! One run like Apple's App Store, where things are reviewed and approved. As for me, I think these AI speakers are the next big thing, like the iPhone.
Will come back to this as a winter project, busy building a coffee trailer this summer https://goo.gl/photos/9rrRAZy7xSprWVnA6
It looks like your code just has a default.
If you see a pattern and you search for an explanation for it, you can get wrapped up in the hunt and end up investing a lot of time in a wild goose chase.
Our math profs warned us not to do this, because if you zoom out wide enough, there is a pattern in every noise. As an undergrad, I got obsessed with the idea of creating a meaningful divide-by-zero operation.
The result, if I remember correctly, was a "fractal" cave, interconnected, the walls defined by aggregated infinities reseeded by the "echoes" of all previous caves until the next "digit" of the original seed number is reached. What a useless operation, one might think, but I got obsessed with it, because it generated sequences: 1/0 = |1|, 0/0 = 1|2|3|5
Some of the results started to look like the Fibonacci sequence (it's basically an algorithm mapped to infinity, echoing back and forth along the cave walls, after all), and I lost a semester chasing this numeric daydream. :(
Shame on me; I only woke up when my math prof zoomed out over some random noise, revealing "patterns". The truth is, we humans want to see patterns. Desperately. So desperately it can eat lives.
Still a fascinating read; I can fully recommend it. But wake up if what you find starts to eat you.
PS: To double my shame, I never published this. So if you venture down that rabbit hole, put up a warning sign.
This page is great, but the Wikipedia page is too, and it provides other related work and coincidences: https://en.wikipedia.org/wiki/Ulam_spiral
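If you want to poke at it yourself, here's a minimal self-contained sketch (plain Python, no dependencies; the grid size and glyphs are my arbitrary choices) that numbers a grid in a spiral from the centre and marks the primes. The prime-heavy diagonals show up even at this small scale:

    def is_prime(n):
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        f = 3
        while f * f <= n:
            if n % f == 0:
                return False
            f += 2
        return True

    def ulam(n=21):  # odd n keeps the spiral centred in the grid
        grid = [[' '] * n for _ in range(n)]
        x = y = n // 2          # start in the centre
        dx, dy = 1, 0           # first step goes right
        step_len, steps_left, turns = 1, 1, 0
        for k in range(1, n * n + 1):
            if 0 <= x < n and 0 <= y < n:
                grid[y][x] = '*' if is_prime(k) else '.'
            x, y = x + dx, y + dy
            steps_left -= 1
            if steps_left == 0:
                dx, dy = dy, -dx        # turn counter-clockwise
                turns += 1
                if turns % 2 == 0:      # leg length grows every two turns
                    step_len += 1
                steps_left = step_len
        return '\n'.join(''.join(row) for row in grid)

    print(ulam())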
This never led me anywhere, for the record.
Has anyone found an explanation since then?
These things do not seem related at all.