Sony (and probably the other camera makers) deserve to go bankrupt. They've had years to work on digital cameras and yet they still don't have a clue what software is about.
Case in point: how well the iPhone takes pictures. Sure, the sensor is tiny, so at some level the quality of the input has to be lower, but so much software is applied to take the data from the sensor and make a great picture.
There's no reason any digital camera manufacturer couldn't have thought of these ideas. In fact they do add new processing every year, but for some reason they're not thinking outside the box.
Another area is sound. Go to a concert or a dance club where the sound is loud. Pull out any digital camera and record 30 seconds or so. Then pull out your iPhone and do the same. Now take them home, copy them to your computer, and listen. The iPhone's audio will be an order of magnitude clearer; the digital camera's will be distorted beyond listening.
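That harsh distortion is classic hard clipping: loud sound drives the signal past the range the camera's ADC can represent, and the wave tops get squared off. Here's a toy sketch (numpy, a simulated tone, made-up levels) of the difference between a camera that just clips and a phone-style limiter that turns the gain down before the signal hits the converter:

```python
import numpy as np

# Simulate a loud 440 Hz tone at 4x the mic path's maximum input level.
rate = 44100
t = np.arange(rate) / rate
loud = 4.0 * np.sin(2 * np.pi * 440 * t)

# A mic path with no limiter hard-clips anything outside [-1, 1],
# flattening the wave tops and adding harsh harmonics (the distortion).
clipped = np.clip(loud, -1.0, 1.0)

# A limiter scales the gain down ahead of the converter, so the wave
# keeps its shape — just quieter.
limited = loud / np.max(np.abs(loud))

# How much of the waveform's shape survived: correlation with the original.
def shape_fidelity(x, ref):
    return np.corrcoef(x, ref)[0, 1]

print(shape_fidelity(clipped, loud))   # noticeably below 1.0
print(shape_fidelity(limited, loud))   # ~1.0: same shape, lower gain
```

Real phones do this with automatic gain control and multi-band compression rather than a single scale factor, but the principle is the same: never let the signal square off.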
As far as I know, most other phones have the same issues in the audio area. Many of them do at least apply a large amount of processing to picture quality.
Theoretically the larger sensors of a digital camera will always be better than the small sensors of a cell phone. But that's only theory. New technologies like metamaterials suggest that in just a few years it may be possible to make the small sensors, and even the tiny lenses, as good as or better than today's large sensors and large lenses. Sure, arguably large sensors and large lenses using the same new tech would be better still, but if your phone took pictures as good as a Nikon D800 or a Canon 5D Mark III, would most people want more? Most people already make do with their phone. I see so many people taking all their travel photos on their phone. Sure, today it kind of sucks with no zoom or whatever, but that's coming one way or another.
Yet another direction that's coming is multi-image processing. Basically, even with a low-res sensor, if you take a lot of pictures of the same thing you can image-process them (in software) into a high-res image. The problem then is movement: everything in the image has to be still. But that assumes the sensors aren't super fast. New materials could enable 1/10,000th or 1/100,000th second exposures. At 1/100,000th of a second you could take 100 images in just 1/1000th of a second, effectively freezing the scene. Apply crazy image processing and you can get a gigapixel image out. Now crop it to any zoom level you want and still have enough resolution for all current uses.
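To make the multi-image idea concrete, here's a toy sketch in Python (numpy). Everything is idealized: the sub-pixel offsets between frames are known exactly, and there's no noise or motion. Real multi-frame super-resolution has to estimate the shifts from the frames themselves, but the core "drop each frame's pixels back into the slots it sampled" step looks like this:

```python
import numpy as np

# A 'scene' at the resolution we wish the sensor had: 64x64.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))

factor = 4  # each low-res frame is 16x16

# Capture 16 low-res frames, each shifted by a different sub-pixel offset.
# (In a real camera the shifts come from hand shake; here we pick them.)
frames = {}
for dy in range(factor):
    for dx in range(factor):
        frames[(dy, dx)] = scene[dy::factor, dx::factor]

# Shift-and-add: place each frame's pixels back at the full-res positions
# its known offset says they came from. With every offset covered, the
# high-res image is recovered exactly from low-res frames.
recon = np.zeros_like(scene)
for (dy, dx), frame in frames.items():
    recon[dy::factor, dx::factor] = frame

print(np.allclose(recon, scene))  # True
```

In practice the offsets are fractional and noisy, so reconstruction means registration plus a solver rather than pure interleaving, but this is why more frames buy you more resolution.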
There are even techniques to image-process depth information, making it possible to change the depth of field after the fact and even change the perspective. No more need for all those different kinds of lenses when you can simulate them all and still get something better than today's best quality.
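A crude sketch of after-the-fact depth of field, with everything made up for illustration: a synthetic image, a two-level depth map, a box blur standing in for a real lens blur, and a hard in-focus/out-of-focus cutoff instead of a blur that grows with distance from the focal plane:

```python
import numpy as np

def box_blur(img, k):
    # Separable box blur: convolve each column, then each row.
    kernel = np.ones(k) / k
    img = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode='same'), 0, img)
    img = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode='same'), 1, img)
    return img

rng = np.random.default_rng(1)
image = rng.random((32, 32))
depth = np.zeros((32, 32))
depth[:, 16:] = 5.0  # right half of the scene is far away

def refocus(image, depth, focus_depth):
    # Pixels near the chosen focal plane stay sharp; the rest get blurred.
    blurred = box_blur(image, 7)
    in_focus = np.abs(depth - focus_depth) < 1.0
    return np.where(in_focus, image, blurred)

near = refocus(image, depth, 0.0)  # left half sharp, right half blurred
far = refocus(image, depth, 5.0)   # right half sharp, left half blurred
```

A real system would vary the blur radius continuously with depth and use an optically accurate point-spread function, but the point stands: once you have per-pixel depth, focus becomes a software decision you make after the shot.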
Of course some of this tech is 10-20 years away. Processor speed has to keep doubling every 18 months, or get massively more parallel, before you can process that much data in a reasonable amount of time, but it's clear it's coming.