Ask HN: What do you think is the next technology worth mastering?
59 points by Murkin on Dec 31, 2009 | 95 comments
I have friends who got very deep into iPhone dev a few years back. Now they are considered masters of their craft and have built a good name and connections for themselves (and monetized on it).

Now that I have lots of spare time, I've decided to delve into something (very) new, in the hope that two or three years down the road I, too, can position myself the same way in a new field.

So what do you think is most worth learning?



This is an interesting question and one I've been pondering lately as well, not from an individual perspective, but from a management perspective for a small team of developers at a web app/design shop.

I obviously have to be more conservative than you because I have to consider short-term profit as well as risk. So in order to get responses that are along the lines of your question, I'd like to ask HN readers: what do you think is worth concentrating on in 2010 that would lead to good short- and long-term results for an agency specializing in website and web application development, often for relatively small projects?

To answer your question, I think you need to do some thinking about what it is that you want to do. You have to start with something that interests you. Your friends chose something in the realm of mobile development: does that appeal to you? Perhaps they also focused on games: does game programming appeal to you? Someone below suggested "data mining" as a field to focus on. I can say for myself that although I am somewhat interested in this, I am not anywhere close to as interested as I'd need to be to devote several years to it.

It also has to be something with a good chance of paying off, if you are interested in the money side of it. Your friends chose something that was backed by a huge corporation with a proven track record of creating successful devices, so although they ran some risk - the iPhone might not have become the massive hit it did - they mitigated that risk by choosing something with very good odds. However, any choice that relies on predicting what will be successful in the technological realm in two to three years is bound to be risky, especially if it is "very new" (and thus unproven).

Here's a shot at it: focus on mobile web application development utilizing HTML5 features. Google believes "the web has won", and I agree. If you get really good at building browser-based applications for mobile phones, I think you'll do well.


If you aren't already doing it, try making your specs human- and machine-readable, using a tool like Cucumber. This is a process-oriented improvement (not a technological one). The $ benefits come from decreased status-checking overhead and transparency into how the code base actually behaves.
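To make the idea concrete, here is a minimal sketch of an executable spec. Cucumber itself is Ruby-based and uses plain-text Gherkin features; this shows the same pattern with Python's behave package (the feature text, step wording, and discount logic are all made up for illustration):

    # features/checkout.feature -- the human-readable side, shared with clients:
    #   Scenario: Applying a discount code
    #     Given a cart containing 2 items at $10 each
    #     When the customer applies the discount code "SAVE10"
    #     Then the order total should be $18

    # features/steps/checkout_steps.py -- the machine-readable side
    from behave import given, when, then

    @given("a cart containing {count:d} items at ${price:d} each")
    def step_cart(context, count, price):
        context.cart = [price] * count

    @when('the customer applies the discount code "{code}"')
    def step_apply_code(context, code):
        context.discount = 0.10 if code == "SAVE10" else 0.0

    @then("the order total should be ${total:d}")
    def step_check_total(context, total):
        assert sum(context.cart) * (1 - context.discount) == total

Running the tool executes the plain-text scenario against the step definitions, so the spec doubles as a regression test and a status report.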

Similarly, invest in testing technologies for the front end. Many web-app shops only test the back end, but as front-end logic becomes more complex, the advantages of testing it become more pronounced.


Excellent answer, thank you.

This is one of the main fields I have been looking into. The idea of separate iPhone/Android/BlackBerry/etc. apps sounds absurd to me given the Internet-everywhere movement.

Uniform web-hosted apps are surely coming soon.


Pro-tip: jQuery is light and fast enough for mobile and provides clean abstractions.


+1 for Data mining, concurrency, FP.

Another one: HTML 5 and related.

This probably won't make as much of a splash as AJAX did a few years ago, but some of the new things you will be able to do in a web browser present an opportunity to improve on older web apps: e.g. the geolocation and device-orientation (accelerometer) APIs, web sockets, multi-file uploads and drag-and-drop.

Biggest problem with HTML 5: it's going to be years before complete market penetration - but FF, Chrome, and Safari all have much tighter upgrade cycles than IE. For intranet stuff or anywhere you have a reasonably captive user base, why not offer better features to your users if they upgrade? Side benefit: it's much more fun to be coding for the future than for the past.


AJAX originated as an obscure hack put in IE by a Microsoft employee. The only reason it made such a splash later is that other people tried it, found they could use it to do cool things, and then spread the word.

HTML5 could make just as big a splash. Someone making local storage work seamlessly in a large webapp would be noteworthy, and take away the biggest problem with Chrome OS: can't work offline.


s/someone/google


Configuration management à la Chef and Puppet. The old way of sysadmins hand-maintaining configuration files on individual UNIX machines simply won't cut it in the era of easily provisioned, ephemeral cloud resources.


This is already the reality on the ground, not 2 years out


Concurrency models: Erlang, Scala, Clojure, Haskell.

Parallel execution, map/reduce, Hadoop, NoSQL datastores.
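To illustrate the map/reduce shape these tools share, here is a toy sketch in Python's standard library (not Hadoop or any of the languages named above; the input chunks are made up):

    # Toy map/reduce word count: map runs independently per chunk, reduce merges.
    from collections import Counter
    from functools import reduce
    from multiprocessing import Pool

    def map_phase(chunk):
        # map: each worker counts words in its own chunk, no shared state
        return Counter(chunk.split())

    def reduce_phase(left, right):
        # reduce: merge two partial counts into one
        return left + right

    if __name__ == "__main__":
        chunks = ["the cat sat", "the dog sat", "the cat ran"]
        with Pool(processes=3) as pool:
            partials = pool.map(map_phase, chunks)
        totals = reduce(reduce_phase, partials, Counter())
        print(totals.most_common(3))

The same split-apply-combine structure is what lets frameworks like Hadoop spread the work across machines instead of local processes.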


This will be a niche area for a few very good programmers. A general-purpose developer should not risk this. If it does not pan out, then you've wasted a lot of time.


It's not going to pan out because we're going to roll back to single cores?


Frankly, the vast majority of applications don't have these classes of concurrency, capacity and computational concerns and can be adequately scaled in the traditional fashion.


Exactly what type of application are you referring to? Most server software does a lot of work in parallel. Many computationally complex algorithms are parallelizable to one degree or another. Video games usually have a lot of parallelizable number crunching, analytics very often run quite slowly, UI code requires at least a couple of threads of execution, and on and on goes the list. It's time to expand the idea of "traditional" to include more than sequential programs.


I presume that most applications that most people use most of the time do not need to scale. Most websites don't need to scale, most business apps don't need to scale, and most userland applications don't need to scale. Single-core sequential programs do just fine on comparatively resource-constrained hardware for "everyday" applications. With the exception of maintaining a separate thread for the UI, the examples you gave are edge cases.

I am into game dev, music dsp and highly concurrent web application development... but I recognize that these are all fringe activities.


Parallelization is not only about scalability, but also performance. If a task takes 20 minutes and can be parallelized 4x, it now takes 5 minutes. Only a certain portion of each program is inherently sequential. We're at 2-4 cores now; soon it will be 8-16.

I'm not really in a position to comment on average-joe computing, but my social circle regularly grinds their PCs to a halt. If I had 8 cores and all of my apps were set up to exploit them, I'd be a very, very happy man, and I don't think I'm in a minority. There's analytics on large databases - every enterprise does this. There's gaming - IMHO not a fringe activity at all.

Exactly what are these day-to-day applications? CRUD web apps with no analytics? Word processors, which certainly do parallel work for spellchecking and the like? Firefox? Photo editing? I honestly can't think of a program worth writing that couldn't stand to use multiple threads of execution - even gedit polls files for modification.
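The 20-minutes-to-5-minutes arithmetic assumes the task is perfectly parallelizable; the general bound, for a parallel fraction p of the work and N cores, is Amdahl's law:

    S(N) = \frac{1}{(1 - p) + p/N}

With p = 1 and N = 4 you get the full 4x (20 min down to 5 min); with p = 0.9, four cores give only about 3.1x, and no number of cores can beat 1/(1 - p) = 10x. That limit is exactly the "inherently sequential portion" mentioned above.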


I agree that increased parallelization of common computing tasks could increase performance, but I believe that performance is acceptable (by the consumers and management) for most applications.

Programming games is fringe programming. I believe that most PCs are used for: CRUD web apps, VB/C# apps, productivity apps, document editors, ecommerce clients, collaboration, and communication clients.

Sure, faster programs would be great! I want it as much as the next person. Sadly, I don't see it being the Next Big Thing. I hope I'm wrong! I do think that the present high-performance community is going to get nicer tooling for parallel programming, and some of that may find its way into your everyday consumer apps but that will be in forms like Grand Central Dispatch or other library adjuncts to existing platforms, and therefore not a technological leap but rather a step in the right direction.

Polling files for modification is an example of not taking advantage of platform capability, where existing performance is "good enough." Most platforms now have filesystem hooks for modification, and if you want to improve the speed of gedit you can register an event handler.
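For what it's worth, registering for change notifications instead of polling is only a few lines on most platforms. A minimal sketch using the third-party Python watchdog package (which wraps inotify/FSEvents/ReadDirectoryChangesW; the package choice and watched path are assumptions for illustration, not how gedit itself is built):

    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    class ModifiedHandler(FileSystemEventHandler):
        def on_modified(self, event):
            # called by the OS notification machinery; no polling loop needed
            print("changed:", event.src_path)

    observer = Observer()
    observer.schedule(ModifiedHandler(), path=".", recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)   # main thread just idles between events
    finally:
        observer.stop()
        observer.join()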


Agree. In a few years, we'll all have N cores on our desktops.


This is something I've been looking into, and I'm becoming convinced that, given the wide range of domains requiring expert-level competency (memory models, processor architectures, concurrent data structures) to address the issues effectively, it is likely that after a short period of language/software-layer solutions, the issues will be addressed at the hardware level. It's an intrinsically hard problem that I doubt can ever be made practical for the general (IT-level) programmer community.


It's endlessly hardwareable.

What I mean is, every algorithm or solution designed in software can always, by definition, be implemented in hardware, bringing vast improvements in performance and scalability. This will always happen as long as hardware continues to get faster, and there is no reason to believe that it won't continue to get faster exponentially for the foreseeable future. Quantum computing, 3D processors, light-based data storage, carbon nanotubes, DNA circuits...

Soon, we will have Databases and Web Servers implemented in hardware... When graphics was the rage, Intel put MMX in the CPU. Now we have two CPUs in the CPU, or four. We put SSL in hardware to speed that up. We can and do put it all in hardware eventually.

There are already lots of instant hardware solutions you can burn onto EEPROMs, PLAs, and the like. I don't see any reason Intel or AMD won't let consumers upload programs to burn right into the silicon. It'll be like embroidering polo shirts.


Hardware tends toward being general-purpose over time. It only makes sense to do dedicated hardware for things that aren't feasible to do using general-purpose silicon at the time (notable exceptions are for simpler things like ethernet controller chips, although if a general-purpose CPU is cheap enough to replace the chip, maybe that will change). As you pointed out, hardware speeds do get exponentially faster over time, which makes it less desirable to go to the effort of designing purpose-specific hardware.

The problem with current FPGA and CPLD solutions where you would be able to upload your own algorithms is that currently they operate much more slowly than mask manufactured ASICs. Not to say that couldn't change at some point, but even then the line between software and hardware is pretty blurry. Tools for designing correct implementations of an algorithm in hardware that take full advantage of being in hardware (very fine-grained concurrency, etc) are difficult to master.

Sorry, it is hard to hit these points in depth in a short post, but hardware has a set of concerns that are separate from software, and treating them as exactly equivalent is not really accurate, even if functionally and algorithmically they are the same.


You are right about the tools being difficult to master. I've been doing some work with FPGAs lately, and I see one big problem. The hardware is really cool. But the software is horrible.

I'm working with Altera tools. To do any work, I need to download about 5 GB of stuff. The IDE is painfully slow, badly designed, and crashes about once every hour. Everything is closed-source. Basically, the development process is at least a decade behind that for software work.

Xilinx tools are also closed (don't know if they are as unusable though). I think that if one of the FPGA companies opened up their toolkit, they could win big. But they are too worried about their proprietary routing algorithms to do that.


don't know if they are as unusable though

Pretty much.


Of course, that's a given. This whole business with Ln caches and cache-line misses is a sort of déjà vu in scale (and now people are talking about MVCC and transactions at the memory level ;)

So yes, any software system can be embodied in hardware, but when a problem is so fiendishly difficult and optimal solutions require (sophisticated) algorithms matched to hardware specifics, then the cost equation tips heavily in favor of hardware embodiment. A significant barrier to the n-core future is pedagogical and I'm guessing that will further tip the balance towards a systemic solution embodied in hardware to hide the concurrency issues.


Well, that will be okay. Do you want to spend your time building the game or writing ray tracing algorithms? You want to call Array.sort(), not write a sort. It just takes up time.

It's interesting that CISC won out over RISC considering the conversation we are having. It's simpler for the builder. You can repeat yourself less. Imagine doing a quicksort in assembly. No way, right? Not today. Why?


CISC won over RISC? I think you meant that in reverse :)


Just look at the x86/x64 instruction sets + extensions. It's rather large.


x86 is king. A terrible, ugly king far past its prime, but a king nonetheless.

Edit: The replies below are insightful, I should not have been so reductionist.


And beneath that CISC instruction set is a RISC core and a translation layer. The fact that the instruction set is CISC has more to do with backwards compatibility (hello, Windows) than an actual advantage of CISC.


It also has to do with CISC code being more compact. With memory getting slower and slower relative to the CPU, this is increasingly an (overlooked) advantage. Maybe at some point CPUs will do automatic code decompression in hardware.


Until you look at anything other than PCs. I'm reasonably sure there are more ARM chips in operation than x86, and ARM is (if I recollect correctly) a RISC processor.


It's an intrinsically hard problem, but one that can't necessarily be solved at the hardware level.

For a large number of processors on the right tasks, just getting instruction-level parallelism, for example, is not even a tiny bit as good as using an optimized algorithm.


I understand and agree with you, but that is by definition accessible only to a very specialized expert group.

The differentiating aspect of hardware is that it involves both logical and physical domains, whereas software is limited to logic. We may require new approaches to hardware and architectures, as the "intrinsic" aspect of the problem is very much rooted in the underlying hardware architecture.

(I also wouldn't be surprised at all if the initial successful applications of quantum computing are memory related.)


I think the general (IT level) programmer community doesn't require more processing power to do their jobs than they would have had on a single core desktop in 1999, anyhow.


It may be true that programmers don't strictly require the power, but interesting things become possible when you can afford to waste a lot of power. This should be evident by looking at Rails etc. In another 10 years who knows what will be possible by wasting another 100x processing power?


I don't think you could have comfortably done Rails dev on the typical 1999 workstation.


Said like a true C programmer.


HTML 5. In two or three years time I'm willing to bet a large portion (if not a majority) of desktop applications on all platforms will be written using HTML 5 technologies - in particular offline storage and web workers. If you're an expert with CSS, JavaScript and the various HTML 5 APIs you'll be in a very good position.


The barrier to learning HTML 5 for the entrenched HTML/CSS/JS talent is too low, and the migration path too clear, for "HTML5 expert" to become a notable, exclusively marketable niche.


... and if I'm wrong, those skills will still be useful.


AMQP/RabbitMQ. I explained in more detail why in this post:

http://news.ycombinator.com/item?id=1006208
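For anyone who wants to kick the tires before reading the linked post, here is a minimal producer/consumer sketch against a RabbitMQ broker using the Python pika client (the broker address, queue name and payload are made-up placeholders):

    import pika

    params = pika.ConnectionParameters("localhost")

    # --- producer: drop a job onto a durable queue ---
    conn = pika.BlockingConnection(params)
    ch = conn.channel()
    ch.queue_declare(queue="work", durable=True)
    ch.basic_publish(exchange="", routing_key="work", body=b"resize image 42")
    conn.close()

    # --- consumer (normally a separate process): handle jobs as they arrive ---
    def handle(ch, method, properties, body):
        print("got job:", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack so the broker can discard it

    conn = pika.BlockingConnection(params)
    ch = conn.channel()
    ch.queue_declare(queue="work", durable=True)
    ch.basic_consume(queue="work", on_message_callback=handle)
    ch.start_consuming()

The broker decouples the producer from however many workers you point at the queue, which is most of the appeal.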


A couple things off the top of my head:

- Web apps geared towards the emerging thin-OS netbook market

- I think Android apps will kick off in a much bigger way this year with more people getting their hands on phones supporting the OS. It's still early enough to get into IMO.

- On this page Veera suggested the Semantic Web, and while I have big hopes for the Semantic Web (I should, I blog[ged] about it), it's just not going to mature into what I think you're looking for in the amount of time you're looking for.

- Tichy suggested Data Mining, which I'm getting very into as of late, so maybe I'm not in the best position to give an impartial opinion but I think that will be taking off more. :)

Edit: fixed linebreaks


There is no evidence at all that an alternative OS is going to take off in any significant way. Netbooks are popular because they run Windows XP. I expect it to stay that way.

Android is a gamble.

Data Mining will make a lot of money for the person who can explain his data mining app in one sentence without ever using the words data or mining.


So what would you recommend be the first step in learning about Data mining? Any resources/books you can suggest?

[I figured since you just got into it ... ]


Learn some linear algebra. Literally every how'd-they-do-that technique, from OCR to PageRank, is based on putting data into a matrix and extracting some eigenvectors.
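As a toy illustration of the "matrix plus eigenvector" point, here is PageRank as the dominant eigenvector of a made-up three-page link matrix, computed by power iteration with NumPy (the graph, damping factor and iteration count are all just for illustration):

    import numpy as np

    # column j says where page j's outgoing links point, normalized to sum to 1
    links = np.array([[0.0, 0.5, 1.0],
                      [0.5, 0.0, 0.0],
                      [0.5, 0.5, 0.0]])
    damping = 0.85
    n = links.shape[0]
    google = damping * links + (1 - damping) / n   # the "Google matrix"

    rank = np.ones(n) / n
    for _ in range(100):   # power iteration converges to the dominant eigenvector
        rank = google @ rank
        rank /= rank.sum()
    print(rank)            # relative importance of the three pages

OCR and recommendation techniques like PCA and SVD follow the same recipe: encode the data as a matrix, then read the structure off its leading eigenvectors or singular vectors.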

I took LA in college, but my professor was nowhere near as good, or as focused on computer algorithms, as this guy: http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/Cours...

Oh, check out R. There are a lot of free papers and example code showing, say, how principal component analysis on a dataset of migratory geese can be used to explain so and so, along with generating some pretty pictures.

It's a really nice bit of OSS!
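The same exercise is a handful of lines in Python if R isn't your thing; a minimal PCA sketch with scikit-learn on made-up correlated data (no actual goose-migration dataset involved):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # correlated 2-D data: most of the variance lies along one diagonal direction
    x = rng.normal(size=500)
    data = np.column_stack([x, 2.0 * x + rng.normal(scale=0.3, size=500)])

    pca = PCA(n_components=2)
    pca.fit(data)
    print(pca.explained_variance_ratio_)   # roughly [0.99, 0.01]: one component dominates
    print(pca.components_[0])              # the direction that component points in

The explained-variance ratios are what tell you how much of the dataset a lower-dimensional picture actually captures.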


"Programming Collective Intelligence: Building Smart Web 2.0 Applications" is worth checking out in Borders to get a sense if it's gobbledegook amazon link: http://bit.ly/5JlRz7


I may have misled you. I occasionally refer to information extraction (through NLP) as data mining. It's a bad habit, but data mining just sounds cooler.


Data Mining


+1 It may not be sexy or glamorous but it will be in high demand.


Any justifications for that? You could have said the same thing 10 or 20 years ago, but it still hasn't come into great demand.


Webapps. The rise of webapps will herald a new era for analytics and reporting. There's gold in them thar logs!


Yes, I'm starting to work for an online company that's sitting on 20 TB of clickstream data that no one's gotten around to even looking at.


10 or 20 years ago there wasn't nearly as much readily available data to be mined. Today even moderately high traffic sites generate GBs of log files a day, not to mention the enormous quantity of high value data available through various APIs.


You don't actually need all of the traffic to draw meaningful conclusions. Tracking a statistically sound random sampling of user sessions provides most of the benefit for pattern-analysis uses.


You've actually got the processing power to do interesting things now. I currently have a 250M-record database in my domain of interest; a few years ago, crunching on this database was prohibitively expensive in time, but now it flies, without even getting into what it means to be able to run stuff on EC2 with arbitrary power. (That's direct experience with the same database, by the way, not supposition based on two different databases.) Next, consider how much more data is being generated... it should not be difficult to believe that drawing interesting conclusions from data is, and will continue to be, interesting.


and visualization


Any thoughts on the best resources for learning data mining, i.e. can anyone suggest must-read sites, blogs, or textbooks on the subject? Thanks.


First, you need to grok the basics (http://academicearth.org/courses/machine-learning), but data mining can't be learned from a passive standpoint. You need to find a large dataset (http://aws.amazon.com/publicdatasets/) and try to do something with it. I've been pleased with http://neuroph.sourceforge.net/ for a lot of the stuff I do.


Data mining is a lot of statistical work and machine learning, so if your stats knowledge is rusty, basic or non-existent, then I would suggest you read up on that.


SCPD at Stanford offers a certificate program in data mining

http://scpd.stanford.edu/public/category/courseCategoryCerti...

(if the above is broken, http://tinyurl.com/stanford-graduate-certs )


Ben Fry's dissertation 'Computational Information Design' [http://benfry.com/phd/] is a great start. He breaks down many of the skills needed to succeed at information design and processing.

He's recently released Mastering Data with O'Reilly, which is essentially an expanded second edition of his dissertation.


I think you mean Visualizing Data and not Mastering Data.


I'd recommend you start with data visualization, intuit how mean, median, and variance fail to capture the data, then move on to approximating a dataset with a normal distribution, then linear regression, and finally Bayesian classification/probability models.

Only after you get a good grasp on basics should you move to advanced topics such as neural networks, SVMs, etc.
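A tiny NumPy sketch of the first point, that summary statistics alone can hide the shape of the data (both samples are made up):

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.normal(loc=5.0, scale=3.0, size=1000)      # one symmetric bump
    b = np.concatenate([rng.normal(2.0, 0.5, 500),      # two well-separated bumps
                        rng.normal(8.0, 0.5, 500)])

    for name, sample in (("a", a), ("b", b)):
        print(name, round(sample.mean(), 2), round(sample.std(), 2))

The means and standard deviations come out nearly identical (about 5 and 3), yet a histogram shows one sample is unimodal and the other bimodal, which is exactly why the advice starts with visualization before any modelling.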


Coding for GPGPU or stream processors


I'm doing this now, and it's a heap of fun (if you like fiddly coding challenges :o). But it's hard to see how this or the next generation will be the next big thing. It's still necessary to have a "suitable" problem, and the next generation (Nvidia's Fermi), while improving things enormously, is still very much not a general-purpose chip (I was just looking, and the cache for a multiprocessor is going to be 64 kB - compare that to the 6 MB on my Core 2 Quad...).


The cache may be small, but unless you're talking about something different - graphics memory is fast, and GPU clock speeds are moderate, so cache isn't as critical on a GPU as on a CPU.


The trouble is that you have hundreds more processors. So even if the memory is twice as fast, and each processor twice as slow, memory access is still the dominating factor in efficiency.

You can work around that by being very careful, arranging things so that processors access memory in sequence, which lets reads be coalesced (you're streaming data from contiguous addresses, avoiding the "seek time" of random access). But that only works if all the processors are focused on the same job.

Now you can say I'm just describing the standard problems with GPUs, and I'd agree, but my point is that even in Fermi (which is a huge step forward in many ways) these will still dominate. And it's hard to see how most software fits into such an approach. Hence my warning that they are not becoming general-purpose.


I upvoted you, but I am a bit biased ;)

Seriously though, if anyone is interested in learning about this, the best place to start is definitely the nVidia GPU Computing forums: http://forums.nvidia.com/index.php?showforum=62


Heck yeah! Thousands of cores, each running hundreds or thousands of threads. I was just checking out NVidia's Tesla cards the other day, three of those and you have a full-blown desktop supercomputer priced under $5k. The challenge is going to be thinking up interesting/useful stuff to write for it.


The new technology that interests you.

Seriously, several of the technologies emerging now are likely to be big (or at least grow substantially) over the next few years. But if you focus on the one you find most interesting, it will keep you focused and motivated, which is an enormous advantage. It will also help you enjoy what you are doing, which simply makes the process more pleasant.


I think that the other commenters here are on to something with Data Mining, but they are not seeing it quite right. What is really needed is not just Data Mining but Data Abstraction. That is, there is a LOT of different data out there, people have very different needs for it, and we cannot know all the possible use cases for that data. The information has to be abstracted and simplified so that data-mining apps become trivially easy for business types to write. That is a good area to be in.

If you know enough about dealing with data such that you can be one of the ones building layers on top of the data, then there is a lot of opportunity there.

Five years is a reasonable estimate for this; before then may be a bit tough.

Another important thing to learn is Location-Based Services. The problem is that the applications have not all been invented yet, and it's not clear when it will really trickle down to the ex-VB6 guys.


The term "Data Abstraction" already has a different meaning. The "simplification" of data implies an understanding of the data, and that modification instills biases in the representation that may limit the fruitfulness of the data itself.

Letting "business types" write data mining apps is already on the market and has been for a while. Look into "Business Intelligence" applications. The verdict is that a) making it easy to do BI actually makes it very complex and b) making analysis easy does not automatically expose or explain what analysis is salient.


Hardware hacking and DIY manufacturing/fabrication. Mainstream manufacturing has been offshored, but there are a lot of people working to build a DIY ecosystem that will eventually allow people to turn around low volume, niche products faster than offshore manufacturers.


IMHO it's more about concepts than technologies - e.g., MVC-based web frameworks were a good concept to pick up a few years ago, whether you chose Rails or Django or Zend Framework, and OOP was a good concept to pick up somewhere in the '90s, whether you ended up writing games in C++ or banking apps in Java.

I think one of the next concepts to take off is asynchronous (or event-based) programming. It's been around for a while, but only recently (with growing interest in concurrency) has it become mainstream for non-UI, dynamic-language apps. Good intro here: http://simonwillison.net/2009/Nov/23/node/
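The linked intro is node.js-centric; the same single-threaded, event-loop idea looks like this as a minimal sketch in Python's asyncio (the task names and delays are stand-ins for real network calls):

    import asyncio
    import time

    async def fake_fetch(name, delay):
        await asyncio.sleep(delay)   # yields to the event loop while "waiting on I/O"
        return f"{name} done after {delay}s"

    async def main():
        start = time.perf_counter()
        results = await asyncio.gather(
            fake_fetch("feed", 1.0),
            fake_fetch("profile", 1.0),
            fake_fetch("ads", 1.0),
        )
        print(results)
        print(f"elapsed ~{time.perf_counter() - start:.1f}s, not 3s")

    asyncio.run(main())

One thread interleaves all three waits instead of blocking on them in turn, which is the whole appeal for I/O-bound servers.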


I am surprised no one raised the issue of smart devices. I mean the hundreds of planned smart, wireless, SoC devices that are prophesied to become deeply ingrained in our lives in the coming years.

ZigBee/WBAN/RFID and the technology around them.

This field has myriad required applications:
- Firmware upgrades
- Control & monitoring
- Intercommunication tools
- Security
...and more and more and more.


I just blogged about my predictions for the hot tech for 2010: wireless, analytics, modeling, data/text mining, micro-business development with small, very focused charge-for-use web apps, Linked Data, etc.

The thing is, choose something that is fascinating to you; otherwise you probably don't have as much chance of success. I would suggest going with what most interests you.


Sales.


How about the brain?

How the Brain Encodes Memories at a Cellular Level http://www.sciencedaily.com/releases/2009/12/091223125125.ht...

If you've ever read Mindkiller or Time Pressure by Spider Robinson, then you know where I'm going with this ;-)


How about iPhone development? Being early in a gold rush has its advantages, but there's also advantages to coming in later.

Or Flash? I know it's not loved by very many, but expert Flash developers can demand a very good salary. They're also very well positioned to develop cool stuff on new web platforms.


And, speaking as an Actionscript hacker, the language is very good to work with. It takes all the best aspects of Javascript, gives them some (largely optional) Java-like safety/structure features, and puts them in a much more sane API environment.


The language is pretty cool, too bad the VM and compiler are terrible.


Ha, I think the language is pretty mediocre (making JS superficially enterprisey is not a good idea), but the VM itself is pretty decent, especially on Windows.

The real problem with the Flash implementation is in the native runtime, not the VM. It just struck me that Flash is a ripe candidate for a Smalltalk-style turtles-all-the-way-down runtime implementation!


Scala! ... specifically using Actors for concurrency problems has been a bit of a revelation for me ... and the functional parts if you're not already familiar with FP

... and for practical reasons ... Scala is one of the only academic-ish languages you can actually use in the real world


Data. R/Hadoop/Hive/Pig/Dryad, SQL Server streaming, Excel PowerPivot and the works.


Mobile Devices - not just the iPhone but think "What would software/hardware be like if you could carry your desktop machine everywhere - inside your pocket"


Mobile devices are a good and safe bet, particularly cross-platform stuff. Fragmentation is not going to occur; everyone knows what is at stake, and nobody wants it to happen.


a) Every kind of concurrency stuff. CUDA, threading, Go/Erlang/... light threads, and so forth.

b) Alternative databases.

c) Real-world scalability.


OMAP3 and OMAP4


Semantic web


2002 called, they want your comment back. No seriously, semantic web has been talked about for a long time. People don't understand it, don't care and don't see how it will make them money.


Google is starting to look at microformat data. All that needs to happen in order for more semantic tech to take off is the search engines paying attention.

As soon as a technology translates into an edge on the web, it'll get implemented. This fixes the monetary motivation problem, and forces people to solve the other two :)


Be that as it may, the Semantic Web community and the data it requires in order to exist are still growing, much faster than when I started blogging about it in January 07.

People are starting to care but the "adoption rate" is so low because, as you said, people don't understand it. I still struggle with the subject. Not the underlying concepts, but the picture as a whole.

From an "insider's perspective" I can say with confidence that what we hope the Semantic Web will bring us is going to come about. I just personally no longer believe in "well, here is the old Web, and then here's the day we switch on the Semantic Web. See the difference?"


A subset of the SW, Linked Data, seems to be gathering some momentum. I still think that we are going to hit an inflection point where there is enough linked data to enable great apps, which will drive more people to publish linked data. In that case, the uptake could accelerate rapidly. OTOH, you might be right. I like RDF+RDFS, but OWL, while interesting, seems very heavyweight to me.


I worked at one of the bigger semweb startups for a few years, been to the major conferences several times... the whole concept of putting structure to data makes a lot of sense. However, an explicit ontology is not necessary to make data useful. RDF is a great technology that exploits the generality of graphs, but I'm not sure if and when it's going to catch on. I'd still rather use JSON for most of my transport layer stuff.


statistics/data mining/web analytics. Maybe iTablet development



