here's the thread: https://news.ycombinator.com/item?id=10703194
Which maybe wasn't along the lines the OP intended. :)
The points in this thread helped me better understand team interaction. Especially: A leader is never too busy to listen to a teammate.
Ask HN: What problem in your industry is a potential startup?
It gave me so much food for thought and ended up inspiring a fairly popular essay: http://000fff.org/the-problem-with-problems/
Really enjoyed both the article and the discussion on that one.
Record high temperatures
Increased investment in VR
Majority of new businesses to be subscription based
Techno-conglomerates will invest in emerging countries to get them online
ISIS will succeed in two or three more semi-large-scale attacks in the west (ten to hundreds dead, but not 1000+).
Optimism about deep learning will continue, but it will not revolutionize the world within 2016.
The economy will continue to muddle through, neither exploding in growth nor crashing through the floor.
Syria will continue to be a war zone. The peace process will not make significant progress. However, NATO and Russia will not wind up in a nuclear war.
It doesn't matter if it's an alternative to Microsoft Word, a driver for an obscure wifi card, or an FPS game: people usually get involved in open source not out of the goodness of their hearts but because they are passionate about the end result of their work, software development being only a means to an end.
So get the word out about the specific problem you are trying to solve with open source. If there are other people out there with the same problem and the technical skill to help, they will most certainly find out about your effort and try to help.
Categorizing issues by required familiarity with the code ("novice" vs. "expert" levels) and establishing a good contribution and style guide are just a few of the very useful things one could apply to a smaller open-source project.
One open-source project that I've seen and love is https://love2d.org/. Most of the work is done by a few contributors, but some members of the community give back at times.
For example, as recently as about 5 years ago (when I left), freight prices were sent by brokers on Yahoo Chat to a fresh grad who could then rapidly quote prices when traders needed it.
There's a bunch of things associated with a trade that need to get done - the futures position (it might be worth structuring something interesting, instead of just going for vanilla - the market makers in that space are WAY behind equities/fixed income/currencies), the FX hedging, compliance, etc. Market risk is another area just filled with opportunities - the key is to focus on UX, or they'll stick with Excel.
These processes are not, or badly, automated because the IT departments are large, political animals and the traders (who run the companies and are the major shareholders) are the type of people happy to deep dive into a war zone and have kalashnikovs pointed at their belly in the hope of a 20-30% discount, or who can trek 10 hours in the jungle to meet and charm the extended family of the man responsible for a country's grain exports, thus impressing him and securing a monopoly for life.
Still, since the work is by its very nature extremely human intensive, saving any time from the trader point of view, even for work that is traditionally passed down to the new guys, is very valuable, and the companies make enough money not to need to worry about the size of the bill, if the product is of a good enough quality. Adding reliability is another great angle. Catching a mistake on the FX hedging (traditionally one of the biggest sources of mistakes amongst the less technically inclined) as it happens, rather a few months later when the trade is unwound, might mean a few years' salary saved.
Of course, that would require gaining the trust of the traders. Good luck with that. Half a decade working closely with them should do it... and then you're up against the CTO defending his domain. Trust is way more valued than skill, although both are important.
Great industry though. Meritocratic, fast paced, high stakes, really interesting people. I miss it often.
What in life have you personally been frustrated with? What made it frustrating? Did it have to be that way? What would the better version of the world look like?
If you can't think of anything, then I'd get out and live more, and study along some tech-related path. The first will help you understand more of how the world is, and the second will give you more ideas about how the world could be.
The author of the article was working within a call center and got to see how it could be disrupted on price.
Not a perfect formula, but you can start to see areas that need improvement. Also make sure companies are willing to pay for it.
No sprint, no timer, just time (for a Maker): http://www.paulgraham.com/makersschedule.html
Mendeley lets you tag and organize things in virtual folders, as well as store notes and annotations. The bibliographic info for subsets of papers (eg. with a particular tag) can be exported to bibtex files when writing.
Mendeley is available on all platforms, and syncs across multiple computers. The only catch is that the links to PDFs are only maintained on the 'master' that watches the folder. (If you try to watch folders on multiple instances of the same synced database, all hell breaks loose.)
Pubchase (www.pubchase.com) can also sync with your Mendeley library, and does a surprisingly good job of recommending papers of interest.
papers
papers\filtering
papers\math
papers\languages
papers\filtering\IMM
papers\filtering\MHT
When I read I highlight using the yellow highlighter, and type notes in Evernote. Evernote is searchable.
I make this 'cloud'y by using bittorrent sync to share across computers.
That's all free. The folder method of holding papers/book/topics is nonideal - a paper can cover 2+ topics, for example. But it works, sort of, and I can still search. I use Everything (from void tools) to search on title or author, and then you can search inside pdfs with various tools.
Mendeley and such were nonstarters for me. Any service that makes me pay for cloud storage has lost the war, IMO. I already have 1TB each from Dropbox and google, more from Microsoft, why am I paying you for cloud storage?
I was having trouble with syncing with Mendeley, but reading the other comments, aheilbut worked out the issues, so maybe I'll give it another go. I want to control the cloud, syncing, and file structure choices, not have a program decide those for me.
I don't worry about reading lists - I need to read what I need to read. When I get around to that topic there may be more relevant stuff than I selected back at the time, anyway. YMMV. If I see a paper of remote interest that I can legally grab I grab it, dump it into my file structure, and then rely on search to bring it to my attention again at the appropriate time.
The latter holds my notes, highlights and every piece of information I find somewhere; it can do basically any format, not just PDFs. Additionally it links them up with other things in your database (https://static1.squarespace.com/static/544bf5dae4b0dd27d7018...). This plus the superior search makes retrieval a lot easier than in any other application I have found so far.
Previously I used ReadCube, which was also quite good - unfortunately it doesn't run on Linux, so I had to ditch it. I would love to try out Citavi, it sounds even better than Mendeley, but again it doesn't do Linux. (Note: Citavi carries a fairly hefty price tag, but if you're affiliated with some university, chances are you can get it for free. The other two are freeware with the option to upgrade to premium plans.)
All of these are closed-source, so if you're an FSF fan, steer well clear. Otherwise: well worth using!
They all fit together.
>- recording notes and ideas from papers
Docear's core is a mind-mapping application that is used for taking and linking to different notes - you can take notes using whatever app you want and then add them to the mind map.
> - recording where I got a paper from, ie was it referenced from another paper or found on a particular site?
With Zotero this can be as simple as a click; it adds the current page and document along with metadata and can export it to bibtex or the like, thus tying it into Docear.
> - even how to read a paper.
This is not a technology problem; investigate approaches such as SQ3R (https://en.wikipedia.org/wiki/SQ3R), which is the one I use (actually the variation SQW3R), or PQRST (https://en.wikipedia.org/wiki/Study_skills#Reading_and_liste...). I keep a low-tech approach on purpose for this, but there is nothing preventing you from using Docear for this as well, of course.
>Right now I use google docs with sub folders for unread, read and implemented papers. I tend to use Acrobat to highlight sections of a paper that I find important and Evernote for saving more important notes.
Docear is supposed to be used with PDF annotators, it then scans the PDFs and automatically creates the references mind map.
> If anyone has a good system for how to read papers more efficiently, how to store notes/annotations for a given paper, or how to track a todo reading list I'd be very appreciative!
For reading them take a look at those I mentioned before, SQ3R has been useful for me in terms of articles and books. To store things Zotero is very good for anything which is done via browser. Docear is useful for managing different "projects" (could be disciplines, specific papers, etc). For TODOs you can either also keep them in Docear (since it has a mind mapper you can create a new node and add the references you exported from Zotero there, actually Docear scans the library and automatically adds any PDF to a specific node) or use something like Evernote.
So far it has suited my needs quite well. I don't really annotate them, but I do add a one- or two-sentence overview, which is enough for me.
The biggest advantage of this is how flexible and easily searchable it is if I'm working with different content types.
Works for pretty much every kind of idea.
Then, just remember.
(have any of you guys tried it? Curious to hear your thoughts)
My entire childhood, I had the sense of a world much bigger than the one I lived in that was just beyond my reach. My mother could give me glimpses when we went on vacation, but I was living years out of touch.
When we got Internet access, I got connected in a life-changing way to that world. It transformed how I learn, how I discover, what I remember, how I connect with people, everything, and all of it for the better. I think I'd rather die than go back.
Nowadays, on the other hand, the internet means that a video of the final boss and ending will probably be up on YouTube in less than a week. Probably a matter of hours after the release date, to be honest; there's a sort of obsession with recording the entirety of every new game as quickly as possible.
I also miss the existence of rumours and hearsay about stuff in video games. Like getting the Triforce in Ocarina of Time, Luigi in Mario 64, or finding Mew in Pokemon Red and Blue. Pre-internet and in the early internet days, this sort of stuff spread via the school playground and other places like a weird game of Chinese Whispers or Telephone. It was nuts the kind of stuff that you'd hear, like the story about the flying pink cow that would apparently take you to the Temple of Light or something.
But thanks to the modern internet, these games now get disassembled about a week after their release, so people know exactly what's included and can confirm or rule out any interesting rumour before it can get started.
Related: People's ability to accept they cannot - and should not and do not deserve to - get everything right now.
I'm currently reading a collection of letters from Richard Feynman - curated by his daughter, who weeded through file cabinets full of them, forwarded from Caltech's archives. One can chart the course of a life by the letters that person wrote. It makes me wonder - when all of us are gone, will someone publish collections of emails from our noteworthy contemporaries? I doubt it.
Digital communications are fleeting. While they can more easily be archived than a drawer full of paper letters, they can also easily be deleted with a keystroke. Email and text communications offer many advantages, but they lack the weight of the written word - and the longevity.
The printed paper format seems to lend itself more easily to tripping over interesting articles that you might not normally seek out. Versus mindless, repetitive web surfing.
And of course, if you wanted to share a news item with someone, it meant cutting it out and mailing it via the postal service. Usually accompanied by a brief handwritten note. Always nice to receive. Now a lost art form.
...then seeing the screen cursor start to produce characters that I didn't type:
"Yep sure did"
It would be 8 more years before I accessed the Internet in 1994.
I think what I miss is the thrilling self-discovery of communicating in a new medium.
Wow, that looks like a terrible little poem.
Also, being a smart kid but not having any role models or anyone who knew about the stuff I was interested in to help me in the right direction.
Connection to the outside world could've made a huge difference when an abusive home life was my entire world.
Now everyone has either spoiled the movie ahead of you, bitched online about how awful it was, analyzed the trailer and figured out all the surprises (or the trailer itself gave it away), or we're all tracking the weekend box office returns to confirm our choice of movie as the Winning Choice or as The Bomb That You Shouldn't See (But Will Be A Cult Classic in Twenty Years When It's Given a Chance)
Movies also stayed in the theaters longer. Remember when the original run of Star Wars lasted almost an entire year?
I miss having fewer distractions.
I sometimes miss the more quiet world, getting lost in my own thoughts, interacting directly with people, idle pleasures of various kinds. Of course, I visit that world with an off-grid camping trip each year. A good week being in a beautiful place with no possibility of connecting to anything is awesome!
Then I get bored, play, hike, explore, talk, relax, and sort of reset. I come back charged and ready to go!
Really, my only regret is not having Internet sooner. I would have done so much more as a kid.
Usually I use a Linux web server, but I have worked at places that use IIS and it is really not all bad. Microsoft had some very good ideas in ASP.NET that were compromised by a few mistakes, and if they had fixed the mistakes instead of creating a new MVC framework every year since then, it would be sweet.
Today I mostly code in Java or another JVM language, and I can run a good test environment, if not everything, on Windows, but I deploy to Ubuntu Linux in the cloud.
This year I started 4 new jobs. One at a startup, after an 8-year position that I basically grew up in. There were some layoffs, so I picked up 3 contract positions to try to learn new things and pay bills and junk. I also went from a 10-minute commute to almost 1.5-2 hrs... which is a big jump for me. The 3-contract thing worked for a while until the holidays rolled around, and then everything became too much. I was able to exit one contract gig somewhat gracefully and told the others that I wouldn't be working much during the holidays. Just in time for anxiety to kick in. :P On top of that, I'm currently studying for interviews, which I haven't done, ever.
I think I know how to pace myself better, but we'll see...
Sometime after the summer vacation I panicked, since I realized that I would not be able to reach the 6k MRR goal I had semi-publicly stated.
Instead of openly accepting this, I started making bad decisions to boost the MRR, which in turn led to some (of the right customers) feeling abandoned, and therefore they churned.
That made me realize I was fucking up, and in the end I agreed to part ways with one of the wrong clients, and now we are back on (an even better) track with our app and vision for the product.
But damn it was a hard realization and some hard months living the lie.
Mine was inaction.
They select top Brazilian students and give them scholarships. They usually choose by assessing the candidate's academic and professional track record, besides considering their personality.
They usually select very, very, very impressive young folks.
Fundação Estudar's alumni are really strong. If you are approved, the network you will be part of is a much better gift than the scholarship itself.
For instance, the current CEO of AB InBev (the company that owns Budweiser, among other beer brands) is a former fellow of Fundação Estudar.
Actually, he was one of the first fellows and won a scholarship for an MBA at Stanford.
Every year, some candidates apply for MBA/LLM scholarships at top American universities and for undergraduate scholarships.
I applied for an undergraduate scholarship in a Brazilian university.
I must emphasize that I did not need the money; what I really wanted was to be part of their network.
Actually, this year was my third try. I also made an application in 2013 and 2014.
Previously, I was eliminated in the semifinal phase. Getting to this phase meant that I was among the best 40-50 out of 80,000 candidates.
This year (2015) I went to the final interview with 19 other students.
In the last interview, there were 20 candidates applying for an undergraduate scholarship at a Brazilian or American university. They chose 18 and cut me and a girl who was going to Yale.
I was really sad when I received the result, because I wanted so much to be part of their community. And this year I was really close to finally achieving it. I almost got it!
Being refused three times by Fundação Estudar is my greatest failure so far in my life (I am 22 years old).
This third elimination happened in the middle of July.
It has been a while, and I have been reflecting on it a lot lately. The feedback they gave me about why I was not chosen is also clearer now.
The lessons that I took from this experience are:
1 - I am really glad I tried. If I had not tried I would be wondering for the rest of my life what would have happened. I am quite happy that I will not regret not trying.
2 - With every application process I tried, year after year, I noticed that I was getting better. I was better in interviews, and my professional and academic track record was improving over those years.
3 - Due to the application process I met a lot of people and made some new friends. And they are very impressive guys. Hence, my network has been enhanced.
4 - My strengths and weaknesses are clearer to me now. Knowing what I need to improve is great. Getting this "no" is an excellent way of reducing the information asymmetry between my perception of myself and what other people think of me.
5 - Self-knowledge is something weird.
Really difficult to measure, usually discussed with little or no science when people talk about it, and absolutely important.
These "no"s that I received were the best (and the hardest) way I have ever experienced to improve my self-knowledge.
6 - The process did not change much from 2013 to 2015. I was also older on the third try. But a funny (and, IMHO, cool) thing is that my preparation was better and harder year after year.
I put in a lot of effort every time I tried. And this increased. I read books, searched for info on the Internet, rehearsed, created a notebook for self-assessment, and spent many hours thinking about how I could pass. My preparation was definitely beyond what was demanded.
7 - I have a few idols. Two of them are Americans: Michael Jordan and Bob Knight. Jordan used to say: "I do not accept not trying." And Bob Knight used to say: "The will to succeed is important, but more important is the will to prepare."
I am at peace with the philosophy that two of my idols used to preach.
8 - I took lessons from it.
Hope you guys have enjoyed reading this long answer. My English writing is a little rusty; sorry for the minor mistakes.
For your productivity and health, it's important to unplug and enjoy holidays - http://qz.com/485226/this-is-what-365-days-without-a-vacatio...
For those that replied "Yes," I wonder if the majority are from the United States, it being the only "rich country" that does not require employers to provide paid vacation time - http://cepr.net/publications/reports/no-vacation-nation-2013
Edit: As for what I am building, I am working on a graphics editor written in Common Lisp, partially as a way to verify others' claims about Lisp's efficiency, and partially to see for myself how difficult it is.
My family is consuming my time and/or my hardware currently.
I have gotten to the point of buying a new laptop today, just so I can get my MacBook back.
On the plus side my family seem happier. We're all well fed and I've socialised more this month than I have all year.
Had my last exam before Christmas so I've got a lot of catching up to do from a busy term without side projects.
However, looking at it objectively, I think pulling off the execution of this project will probably take 1-2 months of full-time work. Anyone know where I can get, say, a hundred grand to help me start a business... If only.
Also seeing Star Wars and watching a few videos and a little computer gaming, but mostly programming.
The problem is however not so much if it's technically feasible but why users should contribute their CPU time and bandwidth in the first place (particularly on mobile where both are scarce).
I suppose enterprise environments might be more amenable to such a system than consumer applications, particularly because most office PCs don't use most of their CPU time anyway. The question is what exactly to do with distributed computing capabilities in such environments.
Two: If the interviewer is asking you to do the impossible, e.g. move, relocate, or accept a super long commute, you say: "I have a short-term plan formulated, and I'm working on the long-term one. It's a surmountable problem." You're an engineer; you can solve problems.
Three: Impress people with your CV. Write it in LaTeX. Grab a cool template here: https://www.sharelatex.com/templates/cv-or-resume If you list graduate courses in the left CV column, format the course names as hyperlinks to the course descriptions.
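For the hyperlinked course names, a minimal sketch with the hyperref package might look like the following (the course titles and URLs are made-up placeholders, and a real CV template will define its own column layout):

```latex
% Minimal sketch: course names as clickable links to course descriptions.
% Replace the example.edu URLs and course titles with your own.
\documentclass{article}
\usepackage[hidelinks]{hyperref} % hidelinks keeps the print version clean

\begin{document}
\section*{Graduate Coursework}
\begin{itemize}
  \item \href{https://www.example.edu/courses/cs501}{CS 501: Distributed Systems}
  \item \href{https://www.example.edu/courses/cs542}{CS 542: Machine Learning}
\end{itemize}
\end{document}
```

In a PDF viewer each course name becomes clickable, so a curious recruiter can reach the syllabus in one click without the link cluttering the page.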
Four: When giving your CV to a future employer, even if it's good-looking, full of links, and packed with impressive information, it won't do much unless you have someone at your past uni whom you can name as a reference. That person knows you and knows the course descriptions, so that when a future employer calls, the information is concise and complements the CV.
Five: Throw yourself into deep waters (figuratively). Problem solvers work best when solving problems is literally part of their daily struggle.
Keep working on your projects. Pick the 3 CS subjects closest to the sort of job you're looking for and read the related materials to keep up a minimal theoretical background.
Interestingly, from the commercial side, it is very different. If you've got a single-board computer as part of your product and you are a going concern with the necessary legal NDAs and whatnot in place, the manufacturer will send one of their engineers to sit in your cube and pair program with you until your system is running the way you want. They will create custom releases of their binary blobs that do the things you need them to do in order to make you successful. That is so that your product ships and you start ordering a million a month of their product.
On the flip side there aren't too many really "open" SoCs, not like the old days where the data sheets told you everything. So things are a bit more challenging. I had hopes for the Zynq series from Xilinx, as they had the potential to be the basis for a good "common processor" base: dual ARM Cortex-A9 cores and enough FPGA fabric to make a classic frame buffer, etc. But the number of people who want that sort of system is measured in the thousands, not the millions. No way to make a living at it, no way to sell it for what it would cost to support.
Intel has been bending over a lot however in order to try to take share from ARM. So they will talk to hobbyists about their smallest computers. The Galileo, compute stick, what have you. So there is an opportunity there, for the moment they are aligned with anyone trying to give them exposure.
You have to look at the lineage of how things evolved to the current state - on the PC ecosystem, everything is already on enumerable buses (PCI, USB, etc) with standards (PCI, UEFI, etc) describing how it's all supposed to find what device is connected where and have it all work together. The incremental cost of opening that up to the public is thus fairly small since you need to build your platform to adhere to the standards that are already there anyway. That's how you get to being able to boot a kernel image on a random system or insmod a random driver you found and expect it to (mostly) work.
In the SBC/embedded ecosystem, there really aren't any standards. Since the internal topology of each SoC is different and the pace of new SoC releases is so high, there's no time for standardization - you throw in random IPs from a bunch of different vendors, figure out how to connect it all together, and get it to the market. In this scenario, having something documented is actually a negative thing - once something is documented, people expect it to work the same way going forward. You can hide a lot of hardware deficiencies in binary blobs, something that's very difficult to give up. Thus, there's a huge disincentive to provide full hardware documentation. I'd imagine that in some cases, for licensed IPs, the SoC vendors may not even be allowed to do so even if they wanted to.
Things like DeviceTree are trying to nibble around the edges of this problem, but given the current state of things, it'll be a while yet, as many of the building pieces don't even seem to be in the picture.
Generic OSes simply have the wrong sort of philosophy for this. Microcontrollers, like the AVR or ARM Cortex-Ms, tend to provide an environment that gives you more tools needed to take advantage of the processors in a reasonable manner. They provide the hooks for interrupts so that you can service IO when it comes in, they provide network stacks and filesystem libraries that you can use if your project calls for them, or ignore when it doesn't. Because of this, you end up in situations where programming for an ATTiny4313, with 256 bytes of RAM and 4 kilobytes of flash, is more enjoyable and rewarding than a system with a million times those resources.
A lot of this can be blamed on the documentation, or lack thereof. A lot of the higher end embedded devices -- like the broadcom chip in the pi -- don't have nearly the documentation available to the ordinary user as the smaller chips. As a consequence, users have to pore through the tiny amount of documentation that is available to guess their way to the answers, further ensuring that you'll only get a few ports of operating systems that really don't exploit the power of the chip they're running on. You just get a generic experience with a generic OS.
The solution is to hack. Go deeper than Debian, farther than FreeBSD. Bug manufacturers for the tools needed to expose the dark corners of the chip, to get register maps and interrupt handlers. We need good real-time solutions that the common person can count on. To get things not so ugly, we need the opposite of the tools we have now. We need to lay these chips bare, because the current path just isn't sustainable.
Getting software right is bloody hard work! Especially with ARM where you need to redo a lot of things for every new piece of hardware.
Most companies (and most PMs/engineers), I think, perceive boards as the end products themselves, and any software development after the initial release (and maybe some bug fixes) is more of a burden than a value add. This attitude needs to change, because it results in very, very few boards actually being used to their full potential. What good is great hardware if nobody can make it work? Maybe this will change, but it needs good thought leaders inside companies to make it happen.
I see that upstreaming is often not considered, or couldn't be done. The quality of code is just awful, because that's not a design goal. Being part of an ecosystem, and helping your future self do a better job (not needing to start from scratch every time, if things are upstreamed), is not part of the thinking for many. These things are (or are thought to be) outside of the PM's responsibilities.
Resource constraints come in a lot; many companies try to support way too many products and end up with a level of "barely" making it work, which is good enough for many traditional customers. Doing a good job needs a lot more resources. I remember reading that RPi spent about $5 million worth of development on just the Linux support. I can't imagine many other companies putting that much into any single product.
And there's a lot of the traditional "trade secret" thinking. A lot of places are more afraid of losing sales due to being copied than of not selling boards because of lack of interest. The main goal is never really "enabling the customer/user" or giving options; the first thing is protecting the IP, because the thinking is stuck with the way things used to work.
Also, the "software ecosystem" is highly fragmented, all projects rely a lot on volunteers, and require a lot of specialized knowledge. I don't know if it's even possible to bring people together, but whoever would achieve that would do a big service to both sides...
These are just some thoughts; I'm sure it's not the whole picture (and they definitely, definitely do not reflect the opinion of my employer :)
Specifically: because until the latest incarnations of both the BeagleBone and Raspberry Pi, everybody was running hacky kernels with bodge after bodge of garbage layered on to make things work.
In roughly the last year, both the Raspberry Pi and the BeagleBone black can run relatively clean versions derived from Linux mainline (Debian in my case).
Once the BeagleBone got off of the disaster that is Angstrom Linux, the number of BeagleBones around me shot up like a rocket.
Even if the sources are available, it'd take many man-months of engineering work to get them compatible with mainline HEAD, and even more effort to get to the coding standards required by the kernel maintainers. Manufacturers/OEMs/ODMs don't care, because it won't improve their bottom line to have a current kernel (at least not until a customer wants to run the latest Debian with systemd and udev, which carry certain minimum requirements on the kernel). The Linux kernel community already has too much work on their hands, and I don't see any major company sponsoring the couple million dollars that'd be required for the integration work.
Just look at the myriad of linux-2.6.24 forks. Android handsets, SOHO el-cheapo routers (people still ship 2.4.x kernels for these, LOL), gambling machines (no joke, I actually own a real, licensed slot machine running 2.6.24!)...
Thus most of the companies involved have a "ship it and forget it" attitude towards their products.
The PC is a very odd duck. IBM was a latecomer to the micro/personal market, and they entered it using off-the-shelf components (except for the chip handling the initial bootup, better known as the BIOS).
Thus it was possible for other companies to clone the PC using the same components, and at the time it was possible to do a clean room reimplementation of the BIOS to get around any copyright claims (back then there were no software/code patents).
So once those BIOS reimplementations started shipping, the PC market exploded with competition. This in turn drove prices down.
Another thing is that the IBM design was in a sense a throwback to an earlier "era". While most microcomputers sold were pretty much single-board computers (possibly with a few expansion ports and a single edge connector for ROM cartridges), the PC was more like the Altair 8800: except for the CPU and RAM, everything lived on ISA bus expansion boards.
Thus you had an initial flexibility that very much stayed with it to this day (and was massively improved when the ISA bus was replaced by the PCI bus).
I'm not trying to be ornery, I just don't understand what is ugly about Linux, or what is specifically ugly about using Linux on these systems?
I am planning on supporting Samsung SoCs and will start off with the TI AM335x Sitara series and FriendlyARM's NanoPi2, which contains the Samsung S5P4418 SoC.
I am chasing OEMs/ODMs to fund development for their SoCs, and nobody seems to be interested except FriendlyARM. What do you think I should be doing to get some funding for this?
It's kind of a roller coaster of emotions. You can go from feeling like a wizard because of all of the shoulders you stand on for very little effort, to feeling like a dunce, because some driver doesn't work, and you have no idea how to go about implementing a solution.
If you're not fond of cPanel and you want to use multiple VPSs, you should definitely do it with Docker: one container per client, each mapped to a CNAME (e.g. www.website.com). You can use AWS, DO, or any VPS provider with a mature API to automate the process; use Logspout and ELK for logging and you're all set.
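A minimal sketch of that one-container-per-client layout as a Compose file (the client names, images, and ports are hypothetical; the commenter didn't specify a stack):

```yaml
# docker-compose.yml — one isolated container per client site.
# DNS CNAMEs (e.g. www.website.com) point at the host; a front-end
# proxy or port mapping routes traffic to the right container.
version: "2"
services:
  client-acme:
    image: wordpress:latest       # whatever stack this client uses
    ports:
      - "8001:80"
    restart: always
  client-globex:
    image: nginx:latest           # a static site, for example
    volumes:
      - ./sites/globex:/usr/share/nginx/html:ro
    ports:
      - "8002:80"
    restart: always
```

The point of the per-client split is isolation: one compromised or runaway site can't touch another client's files or process space.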
You can either choose to scale horizontally (more servers) or vertically (increase CPU/RAM).
So rather than finding one solution, you might think about putting your client's sites into different buckets. Some sites might be a small one-off that you're starting from scratch, and those might fall into the shared hosting bucket. Some might be bigger or use a different stack than your typical site and those could fall into the VPS bucket. The remainder might fall into that unknown/unknowable zone, so it'd be good to plan out a way of managing third-party or client-hosted sites.
I think it's a rare thing for an agency to start off a new client relationship with an entirely blank slate, so it'd serve you well to be flexible.
Sites that are 100% static content co-exist in a single jail but, other than that, it's one jail per customer (whether they have one or multiple sites). We would separate multiple sites belonging to the same customer into multiple jails if there was a good reason to, though.
Another jail runs nginx as a reverse proxy in front of the jails.
The big concern for me was the compromise of one site affecting other, unrelated customers. That's why we've separated them like this.
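The proxy-jail arrangement described above can be sketched as an nginx config (the hostnames and jail IPs are hypothetical):

```nginx
# nginx.conf fragment in the reverse-proxy jail.
# Each customer jail has a private IP; nginx routes by Host header,
# so a compromise of one customer's jail stays contained there.
server {
    listen 80;
    server_name customer-a.example.com;
    location / {
        proxy_pass http://10.0.0.11;          # customer A's jail
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
server {
    listen 80;
    server_name customer-b.example.com;
    location / {
        proxy_pass http://10.0.0.12;          # customer B's jail
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```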
But the choice between a VPS and a dedicated server depends on several factors (traffic, requirements, compliance, projections, etc.). I always recommend starting small unless the business whose website is being hosted is already established.
As far as the hosting platform is concerned, you should always seek to leverage a control panel. This is one of those things you wish you had only after you're stuck without one (we support panel-free servers too). Your clients don't need to contact you for every little change they need to make.
2) The first re-use of a rocket booster to launch a payload into orbit.
3) A commercially exploitable use for Graphene is found.
4) Some genetic condition is completely cured in mice using CRISPr techniques.
5) VR/AR actually ships in underwhelming quantities.
6) Power companies sue to block people from installing whole house batteries.
7) Biometric firearms see widespread adoption.
8) Drone "mortar" shells (single use drone carrying a shrapnel grenade) see use in the battle fields of the middle east.
9) Google has its first wide spread layoff not associated with an acquisition.
10) Nintendo ships a fun to use game console.
1) Ad blocker usage doubles on the desktop in Western Europe and the US by the end of the year.
2) Websites locking out users of ad blockers becomes routine, rather than exceptional.
3) There will be at least one successful legislative attempt to outlaw ad blocking, in Europe.
4) Long shot: Apple ships a minor (+0.1) update to iOS with ad blocking enabled by default. (Blackberry might do the same, but it would not be as consequential as Apple doing it.)
Bookmark so you can all laugh at how wrong I was!
2) The majority of the population still does not know / appreciate what it takes to make a website
3) Computer science grads are pissed because they can't find jobs despite constantly reading news about how there is a shortage of programmer jobs
1) React.js starts to lose popularity due to its ultra-complex tooling ecosystem. People want to feel like what they learn will still be useful at their next job. The React ecosystem doesn't provide that; it tends to burn people out.
2) Smaller frameworks like Vue or Riot take the spotlight.
3) Angular 2 is a hit thanks to its "batteries included" design, which will appeal to React burnouts.
2) There will be considerable progress made on the Winograd Schema challenge.
3) The (unconstrained) Turing test won't be "officially" passed until October 2017 though (it will actually be passed in July or August, but the news won't leak until October).
4) Some people will continue to insist that Deep Learning is nothing different from what was being done in the 1990s. At some point someone will get frustrated enough with this to blog about everything that is different now.
 Specific enough prediction?
2. Multiple new operating system projects announced, targeted for unikernel VM deployment
3. Emerging AI techniques commercialized for creative applications, e.g. a new wave of selfie apps, professional art or music tools
4. Distributed apps using blockchain protocols start making small ripples
5. Software and devices that successfully bridge the mobile/desktop gap are demonstrated
6. VR/AR devices ship, but demand remains modest and mostly in professional niches
I would have said something about the economy and finance too, but this is a tech predictions thread. So I'll go for a tech-economic one:
7. Trends turn against one or more of the current leading social networks, as a bold newcomer finds an opening
8. Bubble mania in the Valley peaks and shifts towards panic as key macro indicators start sagging
1a) Release of the first non-major-brand (JD, Case, etc.) autonomous machine.
2) Drones see uptake in precision spraying applications, although this will be on small farms. Broad acre will still be too hard for a while yet.
3) NDVI becomes one of the most commonly used inputs pre seeding and for nutrient applications.
2) VR successful on a small scale, no successful AR products
3) 2016 is the year of drones
4) Marijuana startups heavily funded
5) More JS frameworks come in and out of vogue
6) Neither native apps nor the web are going anywhere
7) YC's average founder age increases to slightly over 30
8) Apple releases a tablet/laptop hybrid which flops
9) Property values decline in the Valley yet relentlessly surge in SF
Digital Ocean gets acquired.
Google Fiber buys Cincinnati Bell.
IoT still hasn't gained much traction.
Bitcoin suffers 51% attack.
Archaeological evidence of human life found on Mars/Moon from some time > 12K years ago.
4G meshnets become popular in developing countries.
Soon thereafter, fully free software ARM handhelds could take hold if common tooling accelerates both innovation and stability. So..
2. Base-level (free) software [and possibly hardware] (frameworks, languages, and developer tools) will coalesce and clear winners will emerge, while side-level interests will continue within ongoing and offshoot communities whose work then funnels back for mainline user adoption.
Bonus (re: emergent winners): Communities will pick targets and set their aims based on principle more so than popularity.
1) FinTech will start to shine through, the first consumer banks built using modern tech will open to customers in Europe (London FinTech scene is strong, Zurich has ex-Googlers and strong finance, or Frankfurt but they are currently trailing - my bet is on London). These banks will experience very strong growth, the question really is: Will they go with it or go for acquisition
2) The "Family Plan" will emerge as a new sales market in most of the established consumer products, with Dropbox, Google, and others all building strong offerings for managing the product use and sharing of a group of users. This is ground-work for centralising both "family" and "home", and lays the foundation for the command and control of IoT over the next few years - it is how the big established players stay in the game.
Neither of those predictions is quite there; the leading new bank is Mondo https://getmondo.co.uk/ but it's in beta and currently iPhone-only, which limits adoption in the UK, and the benefits of the new tech haven't yet been fully realised.
And I'm not yet seeing the "family plan" head this way but I'd be surprised if the penny doesn't drop somewhere and this be the path taken. Control of the home is control of the family, and vice versa... the family want tools for this better than the ones they have today.
An entirely different thing I'd like to see exist but do not know of anyone working on at all:
3) A dating app that acknowledges the hook-up culture that seems to be growing in the millennial generation of users, and instead of hiding that under the carpet uses its data to encourage responsible tracking of STDs and other risks.
That one is inspired by news this morning on Gonorrhoea becoming immune to antibiotics http://www.bbc.co.uk/news/health-35153794 . The app should function similar to whatever the core function of Tinder is, but allow tracking of whom you've done what with for the sole purpose of allowing notifications of STDs to be quickly disseminated to people who may be exposed.
It may be infeasible; I don't know how willing those who do hook-ups are to track risk-related activity and test results. The core idea is "disease tracking in social networking and dating apps".
2. Major browsers will start experimenting with warning the users that they are visiting a site that does not support HTTPS (although I think that they won't be adding that to stable versions until 2017).
3. Google+ is going away for good.
4. The European Union makes progress in uniting as a single market to fight geo-blocking.
5. Self-driving cars are still not ready to be commercially available.
2) Trump's popularity will die after he takes a controversial stance too far
3) We will see scare pieces on local news stations about Oculus technology being for perverts and losers and it is destroying the fabric of society
4) Expansion of TSA pre-check program, including more options that cost more money and require some new hardware, like retina scanning, at the cost of billions of dollars in new machines
5) New NSA privileges are granted by congress by silently slipping into some entirely unrelated bill
6) Obama will make some sort of grand, popular gesture (à la taking credit for the Bin Laden kill) in an attempt to make Democrats as a whole look better to improve Hillary's chances (edit: actually, this is more likely in 2017)
7) Standard of living will continue to decline for Americans as compared to other first world countries
8) Police corruption/tyranny will continue to be a hot button issue and we will continue to see a push for body cams throughout the country (thank god)
9) Some scandal will be unveiled or manufactured around Elon Musk
2. The Windows 10 Store will remain totally unused. Microsoft won't say it's a failure, but everyone will still prefer to use old x86 applications
3. Self-driving Cars not ready
4. Still a very low number of electric cars sold, but their price will drop, probably making them a good investment in 2017/2018. In Europe the situation will remain the same as today (almost no one uses them)
5. Some move by Apple in the low-price smartphone market. This time really low-price; they won't go for a 5C like before but for something else, simpler
6. Touch-screen laptops still ignored by most
7. ARM laptops hit the consumer market with Linux or Android on them
8. Virtual reality devices are a flop because of their costs (both for the devices and for the PC you need to use them). Almost no AAA games will support them
9. Vulkan will be a great revolution in gaming
10. All companies going back to native development on mobile instead of hybrid solutions.
11. React will slowly decline, losing to Angular 2
- A Tinder not for hookups (perhaps even for friends - see below)
- A small-scale Meetup.com for exchanging knowledge
- Time banks take off
2. Digital rights weakened all around.
3. Analog media makes a comeback (books, art, film, letters, photos, vinyl, etc.)
4. Phones become uncool.
5. Donald Trump wins the election. (Democracy is a technology, or at least it is in Civilization.)
1. Development for proprietary mobile platforms (such as iOS/ObjectiveC/Swift and Android/Java) will drop in popularity.
3. The "App Store" will start to lose its relevance.
4. Mobile (vs. web) usage continues to grow.
And for business:
5. Social networks coming out of Asia (China, Japan, Korea, SEA - such as WeChat or Line) becoming globally popular.
6. Crash/collapse/accounting fraud scandal in a major internet company. (No idea for specifics but look at the ride subsidies of uber for an example of something that can go wrong).
7. Xiaomi starts selling in USA and Europe with considerable success.
There's another saying as well, which I think might be a billg thing (but don't quote me on that), which goes something like this (paraphrased a bit, probably): "People tend to overestimate the magnitude of technological change in the short term (2-3 years out) and underestimate the magnitude of technological change in the long term (say, 10+ years)".
I find that's largely true. Next year, most things will be mostly like they were this year, just incrementally different. 2017 will be mostly like 2016, and so on. But somehow these amazing things tend to sneak in there just the same...
I guess I still didn't actually make any predictions, did I? OK, fine, you twisted my arm. I'll take a stab at some:
1. Wikidata will continue to grow in maturity and scope and will be a terrifically important piece of the Semantic Web as it continues to grow.
2. People will continue to insist that the Semantic Web is dead, and you'll see a continuation of something like an analog to the old saying "once it works, people stop calling it AI". Nobody will ever say that the Semantic Web has arrived, but we'll be using Linked Data and related technologies (although perhaps not the RDF/SPARQL stack)
3. I'll predict that at least one new (or new'ish) probabilistic programming language will gain some major traction in 2016.
4. Hadoop / Spark / etc. will continue to grow in the enterprise and start to move beyond POCs and demos.
4.5 - but most businesses are still just spitting in the wind, stumbling in the dark, etc. when it comes to actually becoming more scientific / data-driven. But you'll see more "stuff" (tools, technology, methodologies) etc. promoting the use of scientific thinking, analysis, etc. in business decision making.
5. Zeppelin will grow in popularity as people write additional integrations / interpreters for it.
6. MOOCs will continue to eat the foundation out from underneath traditional education. It's arguably already the case for programmers and people like us that a traditional university degree isn't all that important for many jobs; and more employers will start really looking at certificates from things like Coursera classes and hiring people without those traditional degrees.
2. Mozilla and MS will abandon their own HTML rendering engines and start using WebKit.
3. The eye-balls as revenue model will fail. Ad-supported magazines, video sites and many more will become subscription-based instead.
3.5. As a sad consequence, Google will lay off a LOT of personnel. Facebook too, I guess.
4. HiDPI 3200x1800 screens will become the norm. Tablets will make specialized devices like the Kindle obsolete.
5. Bitcoin will not take off. Visa, Mastercard, or a European bank organization will launch a crypto-currency that might take off. Critics will complain that it won't guarantee any anonymity.
6. Multi-threading and multi-processing won't go anywhere because they are too hard. A new language might be created by one of the big companies promising to make parallel processing easy, but it won't, and the language won't be adopted.
7. I predict a lot of health and self-improvement tech being marketed, like do-it-yourself genome sequencing and apps to monitor your stats and help you live more healthily.
8. GPUs will be equipped with chips to support raytracing, raytracing-based games will be released, and they will look completely mind-blowing.
2. First use of a railgun by the Navy in an active campaign
3. More car companies go the battery powered route with newer models
4. It becomes even easier to make an app on the phone, but the landscape continues to fracture on the phone OS front
5. Someone comes up with something interesting using a small computer like the Raspberry Pi Zero and 3D printing
6. Encryption for the general population gets better with some really great app.
2. Social networks withdraw more APIs to protect not only privacy but their own business interests
3. Privacy becomes more annoying to lawmakers
4. Lawmakers in the US vote in more surveillance powers to force social networks to comply with their information demands
5. Lawmakers in the EU create new laws to constrain their citizens' data to EU countries
6. Distributed social networks pick up traction in both the EU and US in response to #3 and #4. One will be promoted by a major player, probably Google as a play against Facebook.
2. Google becomes the highest-valued tech company, ahead of Apple
The tools make it really easy to put together cool projects for 2016.
Developing iOS 8 Apps with Swift: a Stanford course with a great professor that focuses on learning Swift with a heavy focus on the iOS SDK. Really great for getting the fundamentals of iOS programming; the course assumes a solid programming background (e.g., familiarity with MVC).
Book: Swift Programming: The Big Nerd Ranch Guide (Big Nerd Ranch Guides). This looks like it just came out, and I haven't read it, but I did read the authors' Obj. C Guide and their iOS Guide, and both were great, so I'm assuming the same for this one.
0 - https://itunes.apple.com/us/course/developing-ios-8-apps-swi...1 - http://www.amazon.com/Swift-Programming-Ranch-Guide-Guides/d...
But, if you really want one, the definitive choice is "The Swift Programming Language", which is free: https://itunes.apple.com/us/book/swift-programming-language/...
I've really come to like Swift as a language, since it's concise and simple. The iOS SDK, however, is not so much fun. Learning the iOS SDK will take a lot of time before you develop routines to approach your problems; I'm still learning about strange behaviour from the SDK and getting frustrated by it every time I use it. It's important to keep things fun: look through the trending Swift repositories on GitHub, and follow #iosdev on Twitter and /r/iosprogramming. You will learn a lot if you keep up to date with these sources.
Some of the habits I've created, which might be handy to other people:
- I manage dependencies with CocoaPods, with no dependencies residing in my repository.
- I don't use storyboards or xibs at all. Everything I do is with SnapKit. This takes some time to learn, but it greatly improves the diffs and the overview of how the UI elements are constrained and set up.
- I use API endpoint enums which get called by an API handler, which uses Alamofire to execute the API requests.
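A minimal sketch of that endpoint-enum pattern (the endpoint names and base URL are hypothetical, and the actual request execution through Alamofire is only hinted at in comments):

```swift
import Foundation

// Each endpoint is one enum case; associated values carry path parameters.
enum APIEndpoint {
    case users
    case user(id: Int)
    case posts(userID: Int)

    static let baseURL = "https://api.example.com/v1"

    // Each case knows its own path...
    var path: String {
        switch self {
        case .users:             return "/users"
        case .user(let id):      return "/users/\(id)"
        case .posts(let userID): return "/users/\(userID)/posts"
        }
    }

    // ...and its HTTP method (all GET in this sketch; real code varies per case).
    var method: String {
        return "GET"
    }

    var url: String {
        return APIEndpoint.baseURL + path
    }
}

// An API handler would hand the endpoint to Alamofire, roughly:
//   Alamofire.request(endpoint.url, method: .get).responseJSON { ... }
let endpoint = APIEndpoint.posts(userID: 42)
print(endpoint.url)  // https://api.example.com/v1/users/42/posts
```

The nice property is that every endpoint is declared in one place, so adding a route is a compiler-checked enum case rather than a stringly-typed URL scattered through the code.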
- I try to use as many of the tools provided by Fastlane as possible. Especially if you're developing many apps, or incrementally building an app and releasing a new version every two weeks: automate all the things. Otherwise you will waste so much time simply waiting for a process to complete.
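A minimal Fastfile sketch of that kind of automation (the lane names and scheme are hypothetical; `scan`, `gym`, and `pilot` are standard Fastlane actions):

```ruby
# Fastfile — automates the build/test/release cycle instead of
# clicking through Xcode. Lane and scheme names are placeholders.
default_platform :ios

platform :ios do
  desc "Run the test suite"
  lane :test do
    scan(scheme: "MyApp")        # runs unit/UI tests
  end

  desc "Ship a beta build"
  lane :beta do
    increment_build_number
    gym(scheme: "MyApp")         # builds and signs the .ipa
    pilot                        # uploads the build to TestFlight
  end
end
```

With this in place a biweekly release is just `fastlane beta` from the command line, which is where the "automate all the things" time savings come from.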
Side note(if you're interested):
I'm currently developing a tool called Evans (I'll write a blog post as soon as it's reasonably finished), which performs all kinds of routines automatically. For example, it listens for GitHub comments like '@evans screenshots' on a pull request. Evans then emits a request to one of the build slaves, which retrieves the branch, builds the project, runs the screenshot routine, puts the screenshots in an S3 bucket, and posts a link as a response in the pull request.
 - https://developer.apple.com/library/ios/documentation/Swift/...
 - https://itunes.apple.com/us/course/developing-ios-8-apps-swi...
 - https://github.com/SnapKit/SnapKit
 - https://github.com/Alamofire/Alamofire
 - https://github.com/fastlane/fastlane
Inactive projects don't matter, while talking about them can give the impression that you were struggling for way too long and didn't get far with them.
Sidenote: please clearly explain what you're doing on top of the page (above "the fold"). Each section should be explained in writing, not just a video.
Including "for investors" section also looks incredibly weird to me...
Or redesign CarouselApps.com: feature your new product on the homepage and move older stuff to some archive-type page.
Pointing to the marketing site for this new product and making no mention of the old business or past attempts will better guide investor interest. You can explain in the story how you got to where you are. Overall though, the pre-seed timeline is way less important than the product you have today and the traction/growth that's in progress now.
As for the reason, it's normally trying to get money out of them, or just to take a website offline for a while (e.g. the DDoS is from a competitor).
Most attacks will continue until you can prove that you're no longer affected by them and can clean out the dirty traffic; this is quite expensive to do, though.
I won a hackathon with this concept and began to pursue it as a startup a year ago. The market is big, and people love comfort foods. Also, today's food delivery startups mostly focus on healthy options.
We did customer validation in Cincinnati, OH, and people seemed very interested, but we don't have the population density to run it here. I also don't think it'd work in New York (too many fast $1 pizza joints) or Chicago (people mostly want high quality and are willing to wait). We think it has a lot of potential in the Bay, where SpoonRocket, Caviar, Sprig, etc. have paved the way.
Side note: Domino's unveiled their own cars of a similar concept in October. http://fortune.com/2015/10/22/dominos-pizza-car/
2) Uber for fixing people's computers/laptops. Plenty of us are good with computers and could easily fix most problems, while 90% of people can't.
Real-Life Megaman Battle Network