Visual Studio LightSwitch is a new tool for creating data-centric Silverlight applications, as I've already written about in a previous blog post.
Currently there's a lot of discussion going on about this tool with regard to "non-programmers", "non-thinkers", "Microsoft doesn't get it"…
I myself love writing code, but "the best code is the code you don't write".
I've written, and loved writing, Assembler code. I wrote C when C was dismissed as too "high-level". Then C++ was said to have too much overhead compared to C, the reasoning being that object orientation wasn't needed. Nowadays, trusting the .NET garbage collector has become normal, and we write XAML and C# code. Of course, there are still scenarios where Assembler and C have their place.
It was really fun to teach a one-week ATL course where, by the end of the week, the major topics covered were templates, dual and dispatch interfaces, custom interfaces, and the different apartment models (STA, MTA, TNA), and the best we could achieve after that week was calling very simple components across the network. Nowadays, in one course week it's possible to create fancy UIs that call services across a network and use complex databases… I wouldn't want to write all that in C or pure Assembler code.
Abstraction goes on and on and on.
Visual Studio LightSwitch allows creating fancy UIs that call services across the network and use complex databases… without a single line of code. In reality, a few lines of code will probably still be required, as there are both C# and Visual Basic templates for LightSwitch inside Visual Studio, but the number of code lines will be greatly reduced.
Visual Studio LightSwitch is not the first tool to offer such functionality. Several others have failed at it. What's different now?
One reason other tools failed was scalability: some tools could be used in a single-user scenario but failed as soon as multiple users accessed the same database.
Visual Studio LightSwitch is based on .NET technologies and probably uses stateless services and well-known patterns to achieve scalability. When deploying to Windows Azure, the number of systems answering client requests can be increased simply by changing configuration values. Hand-written code could probably be much more efficient, so that half the number of systems might deliver the same overall performance as the generated code, but it may well be cheaper not to write that code at all. Regarding scalability with Visual Studio LightSwitch, I see more issues in defining scalable databases.
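As a sketch of what such configuration-based scaling looks like on Windows Azure: the instance count of a role lives in the service configuration file, so scaling out is a configuration change rather than a code change. The service and role names below are made up for illustration.

```xml
<!-- ServiceConfiguration.cscfg (names are hypothetical) -->
<ServiceConfiguration serviceName="MyLightSwitchApp"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <!-- Raise the count to add more instances answering client requests;
         no application code needs to change. -->
    <Instances count="4" />
  </Role>
</ServiceConfiguration>
```

Because the generated services are stateless, any of the four instances can answer any request, which is what makes this kind of scale-out possible in the first place.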
As Visual Studio LightSwitch is based on .NET technologies, I expect enough extension points, with the extensions written by "real developers", to make this technology usable for a broad range of scenarios.
Of course, every tool can and will be misused. For example, when a city changes its postal code, which means updating thousands of customers, would it be a good idea to use a DataReader to read every one of these customers and issue an UPDATE statement for each one? Or to use the ADO.NET Entity Framework to read all the customers into the data context, change the postal code, and call SaveChanges? Or to just run a single SQL UPDATE statement instead? Every tool can and will be used in scenarios it isn't meant for. I'm not expecting Visual Studio LightSwitch to be the tool for every data-centric application.
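To make the contrast concrete, here is a hedged sketch of the two extremes: thousands of per-row round trips versus one set-based statement. The table, column, and city names are hypothetical, assuming a `Customers` table with `City` and `PostalCode` columns.

```csharp
using System;
using System.Data.SqlClient;

class PostalCodeUpdate
{
    static void Update(string connectionString)
    {
        // Inefficient: read every affected customer with a DataReader and
        // issue one UPDATE per row -- thousands of database round trips.
        // (Sketched here only in outline; the EF data-context variant has
        // the same per-entity cost plus the memory to hold all entities.)

        // Set-based alternative: let the database do the work in a single
        // statement, one round trip, regardless of how many rows match.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "UPDATE Customers SET PostalCode = @newCode WHERE City = @city",
            connection))
        {
            command.Parameters.AddWithValue("@newCode", "10115");
            command.Parameters.AddWithValue("@city", "Berlin");
            connection.Open();
            int rows = command.ExecuteNonQuery(); // all customers updated at once
            Console.WriteLine("{0} customers updated", rows);
        }
    }
}
```

The right choice depends on the scenario; the point is simply that a data-access tool built for row-at-a-time work shouldn't be blamed when it is pressed into bulk operations.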
One big issue I've seen many times with tools that provide much of their functionality through a designer is versioning. If you create a big solution with such a tool, how can it be migrated to the next version? Will it be seamless, just opening the solution with the new version? Does it require redoing a lot of work? What if many third-party components are used? Do you have to wait until the third-party components are moved forward to the next version before migrating?
It remains to be seen how this works out with Visual Studio LightSwitch. It's probably best not to start with a big solution but with smaller scenarios. Which extensions are required, and what's already in the box?