Should you split your ASP.NET MVC project into multiple projects?
“Should I split my ASP.NET MVC project into multiple projects?” That’s a question that I get a lot! Almost every week! The short answer is: NO!
I’m not entirely sure how this trend started, but I’ve seen many developers split an ASP.NET MVC project into multiple projects: a web project containing the presentation logic, plus two additional class libraries, often named [MyProject].BLL and [MyProject].DAL.
Also, it’s often incorrectly assumed that this structure makes an application multi-tier or 3-tier. What is a multi-tier application? It’s an application whose parts are physically distributed across different computers in a network. Web applications are inherently multi-tiered. In a web application we typically have the following tiers:
- Client/Presentation Tier: That’s the piece running inside the user’s browser.
- Middle/Application/Logic Tier: That’s the part built with ASP.NET MVC (or other similar server-side frameworks) running in a web server.
- Data Tier: That’s the database, file system or any other kind of storage.
When we talk about ASP.NET MVC, we’re only talking about the application (middle) tier. Separating an ASP.NET MVC project into three projects does not add new tiers to your architecture. You don’t deploy the DAL class library to a different computer! Most of the time (if not always), all three projects (Web, BLL and DAL) are compiled and deployed together on one machine: your web server. So, when someone visits your web site, these three DLLs are loaded into a single process (or, more accurately, an AppDomain) managed by IIS.
Layers vs Tiers
Layers and tiers are used interchangeably by some, but they are fundamentally different. Layers are about logical separation; tiers are about physical separation: distributing the pieces of a software application across different computers.
A layer is something conceptual in a developer’s head. A class library is not a layer, and neither is a folder. You can put classes that belong to different layers in the same folder or class library and make them depend on each other; that is a sign of bad architecture and tight coupling. Putting these classes under folders or class libraries named BLL and DAL does not automatically give you software with a clean architecture and good separation of concerns.
Despite that, my argument is that these folders (BLL and DAL) can and should reside in the main web project; moving them into separate class libraries does not add any value. It doesn’t magically create layers in your application.
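To make that concrete, here is what such a single-project layout might look like (the folder names are illustrative, not prescribed by this article):

```text
MyProject.Web/
    Controllers/
    Views/
    BLL/             (business logic: a namespace and folder, not an assembly)
        Services/
        Domain/
    DAL/             (data access: again just a namespace and folder)
        Repositories/
```

The logical layering is exactly the same as with three projects; only the assembly boundaries are gone.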
There are two reasons to split a project into smaller projects: re-usability and the need to deploy those projects independently.
One reason for separating a project into multiple class libraries is re-usability. I’ve yet to see the BLL or DAL of a web application re-used in another application. This is what textbooks from the 90s used to tell us! But most, if not all, modern applications are too specific; even within the same enterprise, I’ve never seen the same BLL or DAL re-used across multiple applications. Most of the time, what you have in those class libraries purely serves what the user sees in that particular application, and it’s not something that can easily be re-used (if at all).
The other reason for separating a project into multiple class libraries is deployability. If you want to version and deploy these pieces independently, it does make sense to go down this path. But this is usually a use case for frameworks, not enterprise applications. Entity Framework is a good example. It’s composed of multiple assemblies, each focusing on a different area of functionality: one core assembly that includes the main artefacts, another assembly for talking to SQL Server, another for SQLite, and so on. With this modular architecture, we can reference and download only the parts we need.
Imagine if Entity Framework were only one assembly! It would be one gigantic assembly with lots of code we wouldn’t need. Also, every time the team added a new feature or fixed a bug, the entire monolithic assembly would have to be compiled and deployed. That would make the assembly very fragile. If we’re using Entity Framework on top of SQL Server, why should an upgrade caused by a bug fix for SQLite impact our application? It shouldn’t! That’s why it’s designed in a modular way.
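As a concrete illustration of that modularity (EF Core package names shown here; the versions are illustrative), an application references only the core package plus the one provider it actually uses, and the SQLite provider assembly never ships with it:

```xml
<!-- Project file fragment: core EF package + SQL Server provider only.
     Microsoft.EntityFrameworkCore.Sqlite exists too, but we never pull it in. -->
<ItemGroup>
  <PackageReference Include="Microsoft.EntityFrameworkCore" Version="2.0.0" />
  <PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="2.0.0" />
</ItemGroup>
```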
In most web applications out there, we version and deploy all these assemblies (Web, BLL and DAL) together. So, separating a project into three projects does not add any value.
Use Cases for Physical Separation
So, when do you actually need to physically separate a project into multiple projects? Here are a couple of scenarios:
1- Multiple presentation layers: Let’s say you’ve built an order processing application. This application is a desktop application used by staff at your organization. You decide to build a web interface for this application so the staff can access it remotely. You want to re-use the existing business logic and data access components. As I explained earlier, one reason for physical separation is re-usability. So, in this case, you need to physically separate this project into three projects:
- OrderProcessing.Core (contains both the BLL and DAL)
- the existing desktop application (the first presentation layer)
- the new ASP.NET MVC web application (the second presentation layer)
Note that even here I don’t have two projects (BLL and DAL). I have one project, OrderProcessing.Core, that encapsulates both the business and data access logic for our order processing application.
So, why didn’t I separate this project into two separate projects (BLL and DAL)? Because the whole purpose of this DAL is to provide persistence for what we have in BLL. It’s very unlikely that it’ll be used on its own in another project.
Also, following the dependency inversion principle of object-oriented design, the dependency should go from the DAL to the BLL, not the other way around. This means that everywhere you reference the DAL assembly, you should also reference the BLL assembly. In other words, they’re highly cohesive and inseparable. When you separate things that are cohesive, you run into issues later down the track.
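A minimal C# sketch of that dependency direction (all type names here are hypothetical, not from the article): the BLL owns the persistence abstraction, and the DAL merely implements it.

```csharp
// BLL: owns the domain model and the persistence contract it needs.
namespace OrderProcessing.Core.Bll
{
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public interface IOrderRepository
    {
        Order GetById(int id);
        void Save(Order order);
    }
}

// DAL: references the BLL and implements its contract, never the reverse.
namespace OrderProcessing.Core.Dal
{
    using System.Collections.Generic;
    using OrderProcessing.Core.Bll;

    // An Entity Framework-backed class would normally live here; an
    // in-memory dictionary keeps the sketch self-contained.
    public class InMemoryOrderRepository : IOrderRepository
    {
        private readonly Dictionary<int, Order> _store = new Dictionary<int, Order>();

        public Order GetById(int id) => _store[id];
        public void Save(Order order) => _store[order.Id] = order;
    }
}
```

Because every consumer of the DAL also needs the BLL types, splitting them into two assemblies buys nothing; they always travel together.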
2- Multiple applications under a single portal: Another use case, suggested by one of the readers, is where you have multiple small applications hosted in a single portal. From the end user’s point of view these applications are not separate; they are all different domains of the same application. But from a development point of view, each application is independent of the others. Each application can have its own persistence store; one can use Excel, another SQL Server, and another Oracle.
In this scenario, it’s likely that these applications are developed by different developers/teams. They’re often independently developed, versioned and deployed, hence the second reason for physical separation.
For this scenario, we could have a solution with the following projects:
- OrderProcessing.Core (a class library)
- MainPortal (an ASP.NET MVC project)
Once again, you don’t see the BLL/DAL separation here. Each class library (e.g. OrderProcessing.Core) includes both the business and data access logic for its own domain.
The Bottom Line
Here are a few things I hope you take away from this article:
- Layers are not tiers.
- Tiers are about physical distribution of software on different computers.
- Layers are conceptual. They don’t have a physical representation in code. Having a folder or an assembly called BLL or DAL doesn’t mean you have properly layered your application, neither does it mean you have improved maintainability.
- Maintainability is about clean code: small methods, small classes each with a single responsibility, and limited coupling between those classes. Splitting a project with fat classes and fat methods into BLL/DAL projects doesn’t improve the maintainability of your software.
- Assemblies are units of versioning and deployment.
- Split a project into multiple projects if you want to re-use certain parts of it in other projects, or if you want to independently version and deploy each project.
As always, keep it simple!
If you enjoyed this post, please share it with your friends.
I think splitting up your app into different layers is the way to go because it makes your application (especially MVC apps) more testable & maintainable.
Because what happens otherwise is that your controller methods become huge, and if you are using EF then its code also leaks into your controllers.
The end product is a mix of business logic + database access calls which is not at all maintainable or testable.
Also, if you are using a pattern like Unit of Work to handle transactions, then it’s going to be a mess in your controller.
Instead, if you split your app into repository + core, your EF logic can be constrained to your repository and the actual business rules to your core, with controllers just calling those methods; in the process your controller methods remain clean & lean.
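The lean-controller shape the commenter describes might look like this in C# (names such as IOrderService are hypothetical):

```csharp
// The controller only coordinates: it delegates to a service defined in
// the Core/repository code and turns the result into a view.
public class OrdersController : Controller
{
    private readonly IOrderService _orderService; // business rules live behind this

    public OrdersController(IOrderService orderService)
    {
        _orderService = orderService;
    }

    public ActionResult Details(int id)
    {
        var order = _orderService.GetOrder(id); // EF/Unit of Work hidden in the service
        if (order == null)
            return HttpNotFound();

        return View(order);
    }
}
```

Note that nothing about this shape requires a separate assembly: the same service class could sit in a Services folder of the web project, which is the article’s point.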
Please note the title of the post. I talked about “physical separation” of a project into multiple class libraries. Nowhere did I argue against separation of concerns, proper namespaces, or classes each focusing on a single responsibility. What I argued is that all these namespaces (e.g. Repositories, Services, etc.) can and should be part of the main project, and moving them into separate class libraries does not add any value.
The title is “Should I split my ASP.NET MVC project into multiple projects?” – the word “physical” is not in there, hence the confusion from many readers. If it were not for your comment/clarification above, I would still think your post was an opposition to “separation of concerns”.
The word “physical” is implicit in “multiple projects”.
I disagree with that statement. I can create a single project for my MVC web layer and a separate project for only my BLL and DAL. Still not “physical”.
Yes you should!
What do you think about abpframework? I heard of it from a coworker, but after trying to make it work, I find it more of a nuisance than a helpful tool.
I haven’t used it so I can’t comment!
I don’t have much experience, but what I’ve seen is that the reason to create multiple tiers is to run the application on multiple servers. So I agree, separating the main project into multiple ones doesn’t add any value.
Maybe the term n-tier architecture means different things to different people, but the majority consensus is that it implies separation of the presentation, logic and data layers. Whether or not they actually cross process boundaries is another matter. In the early days of n-tier (the 1990s), the industry was happy enough to simply separate these into separate projects. Without separating a project with over a million lines of code into multiple compiled components, it would be a nightmare, because a single checked-in compilation error in one area of the project (easy to do on a huge team) would break the build for everyone. To some extent continuous integration techniques help with this, but not completely.
But the case for re-usability is perhaps the most important issue here. Our ASP.NET MVC project does have a business logic layer and a data access layer. Why? Well, because it started out as an ASP.NET WebForms project, but then we had WCF services that needed to call the same business logic. Then we needed WinForms tools that performed administrative tasks and needed to be sure they were running exactly the same business logic. Later some of this stuff became MVC, WPF and WebAPI, and guess what? It is hard to tell a company they need to throw tens of millions of dollars of development money out the window just so they can rewrite functionality that’s already working well enough. If we had even tried to do that, the project would be dead, and about 23 .NET developers and many QA staff, DBAs etc. would have been laid off and looking for new jobs!
Point is, separating projects is the pinnacle of reusability. Yes, MVC provides a nice separation of concerns and is, by some definitions, inherently n-tier, but whether you call it n-tier or not…
…. modularity, in software design, is NEVER a bad thing.
I totally agree with you @L. Faranetti. Separation aids maintainability, and you could easily work on or fix a bug in a separate project and deploy it without necessarily redeploying the entire web application.
I’m keen to know how many web applications you’ve seen whose parts get independently deployed! And I’m not talking about apps with a microservices architecture. Just plain old ASP.NET MVC apps!
To answer this, I will consider two types of projects.
In the enterprise, where one person is making relatively small apps (those intended to be short-lifespan projects) inside an IT department, separating business logic into multiple projects may be overkill. In terms of how many projects I’ve seen in this category, over the last 25+ years I’ve seen a high number of small internal websites that have one monolithic web app, with no plan for expanding to additional platforms or for re-use. This type of app is small enough that it’s no big deal to tear it down and rewrite it in a new language or methodology when a new, shinier fad comes along.
I’ve worked on no fewer than 50 of this type of application through my consulting business over the years, in some cases writing them myself, in other cases taking over someone else’s project or working on very small (2-3 person) teams.
In commercial, public-facing software it’s an entirely different story. These apps tend to be larger, have an indefinite lifespan, often have hundreds of thousands or millions of lines of code, involve large teams and better planning, and some people have their entire careers invested in one living, breathing app. I’ve worked on fewer of these projects than the type above, but the size of each project is much larger, the consequences of making bad choices are much greater, and overall there is much more at stake. This type of project cannot afford to get caught in an infinite loop of being rewritten or re-architected every time an alternative language or technology becomes popular.
I’ve worked on maybe 10 of this type of product, but these products are large enough that they are often segmented into multiple applications. An example was the app I talked about above, which had to share business logic between a website, web services, a fat client with limited network connectivity, etc.
I think the point you are making with your question is that Type 1 is more common for ASP.NET MVC apps? In that case I would agree, but the quantity of apps isn’t everything. The number of one-trick-pony apps written in ASP.NET MVC is greater than the number of huge commercial products because many large projects came into existence before ASP.NET MVC. Developer technology sometimes evolves too fast for businesses. Many shops are just now starting to move toward ASP.NET MVC, yet Microsoft is already saying .NET Core is the future. Type 1 projects can keep up with this pace; Type 2 projects cannot.
If we measure quantity by the size of the project instead of the number of individual .sln files, I have seen way more code that has business logic embedded in stored procedures! What a horror for MVC purists, right? Yes, but some of those systems were rock solid. And in some cases the stored procedures were approved and tuned by the database team, which was in another building (guys who are not even .NET developers; they specialize in DB tuning and optimization)! And sometimes that database logic is consumed by multiple applications, some of which are not even written on the .NET platform.
I think you can see where I’m going with this. We as developers tend to form our opinions based on our experiences, but none of us can be in all places at all times, therefore we all have much more limited scope than we realize, regardless of years of experience or total number of apps worked on (or even size and complexity of those apps). And we are known for our strong opinions 🙂
Please don’t confuse this with the idea that ALL apps need to be separated into multiple projects. That is not my belief, but if I were starting a new MVC project today, that would be my personal recommendation. If nothing else, I would be vigilant about making sure database logic doesn’t end up in the controllers; at the least I would want to see a dedicated folder called “Services” or similar that interacted with the models and performed business logic. It doesn’t solve the problem for larger teams, but at least it gives that code an increased possibility of a future life, when ASP.NET MVC is no longer the hot ticket.
Think of it like this: what if the company you’re working for one day decides the business logic or services layer needs to be in a different language, like Java, Python, Ruby, or whatever? Which will be easier to port: code that is joined at the hip to an ASP.NET MVC application, or logic that has no dependency on the web and is already separated into its own DLL?
ASP.NET MVC may be the shiny new technology today, but history has shown that will change over time. MVC does a good job at many things, but it is still far from perfect, which means Microsoft will evolve and change it. It’s nice to have an easier migration path to the future for larger projects.
Interesting pondering, on an interesting article. And you are so right that MVC is not the shiny thing anymore; the hype of the moment seems to be Angular, even at the time of your response.
Well I guess if my posts aren’t going to pass through the moderation filter only because you don’t agree with my opinion, I think maybe your course offerings will no longer pass through my purchase filter. Too bad, I was enjoying them.
It’s not cool to censor opinions you don’t agree with. Disagreeing on architectural decisions is a normal and healthy part of software development.
I never censor! Sometimes it takes time until I approve comments.
Ok. Thought you were not allowing my posts and mine alone because so much time had passed and they didn’t appear. Apologies.
Not at all! Sorry it came across the wrong way. It’s just random. Sometimes I check the comments a couple of times a day, sometimes not for a few days!
A solution with MVC and Core projects handles your case perfectly. You have all the necessary layers inside the Core project. You can swap implementations as needed. You just don’t have a project per layer.
I completely agree on this.
There are many reasons and many scenarios I’ve seen for why you should almost always separate into different projects/assemblies.
I even forgot about this excellent point: the logic might stay the same, but front-end technologies change. Why would you not want to use that same logic under a new UI?
I just haven’t read a good reason not to split it. “Because it doesn’t add any value” is simply not true. It might look that way in the first stage of the project, but when it gets bigger you’ll wish you had split it into more projects.
I’d say the opposite: you don’t practically win anything by putting everything in one project. Having only one assembly instead of n doesn’t practically matter.
Nice Post, you are really amazing ….. 🙂
Great post Mosh. In my personal opinion, I prefer the way you teach the path to develop an MVC app. I have developed both ways, and splitting the app, in a certain way, just adds unnecessary complexity to the project (handling security, interaction between layers, and the context that the MVC template makes for you); the real benefits are unseen.
You touched on a great point that I missed in my article: the unnecessary complexity that comes with this physical separation. I remember in the early days EF didn’t like its DbContext to be in a separate project. I’ve come across other examples where whatever framework we used didn’t like the idea of some of its artefacts being in a separate class library. So we’ve done this physical separation and brought in unnecessary complexity for what? Just because it looks nice to have three projects clearly telling us what parts belong to what layers? Couldn’t we just use folders in the main project? BLL and DAL!
I agree that it COULD add complexity, especially if separating the business logic is an afterthought (which it shouldn’t be!), but if the strategy is adopted in the early stages, it can be free of this burden.
For example, security. Authentication and authorization should be the responsibility of the app, not the business logic or data layer, so they should not creep into your BLL or DAL. Your controller actions, in combination with ASP.NET Identity claims for example, are responsible for determining who the user is and whether or not they are authorized to perform that action (and potentially the result they see in the view). It’s an application-specific decision to be made.
A BLL or DAL should not attempt to answer those questions. You should be able to just pass the UserID into the BLL and, in the spirit of claims-based security, it simply takes your word for it that this is the person executing that action. There is no reason not to, since the BLL and DAL cannot be hacked into on their own. The application handles the act of logging in.
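A sketch of that division of labour in C# (the controller and service names are hypothetical): the MVC layer answers “who is calling?”, and the BLL takes the answer on trust.

```csharp
// Application layer: authentication/authorization stay here.
// User.Identity.GetUserId() is the ASP.NET Identity extension method
// (Microsoft.AspNet.Identity namespace).
[Authorize]
public class InvoicesController : Controller
{
    private readonly IInvoiceService _invoiceService; // BLL service

    public InvoicesController(IInvoiceService invoiceService)
    {
        _invoiceService = invoiceService;
    }

    [HttpPost]
    public ActionResult Approve(int invoiceId)
    {
        // The app resolves the identity...
        string userId = User.Identity.GetUserId();

        // ...and the BLL simply trusts the id it is handed.
        _invoiceService.Approve(invoiceId, approvedBy: userId);
        return RedirectToAction("Index");
    }
}
```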
Application does not necessarily mean MVC or similar. There may be a separate application layer in your solution that deals with cross-cutting concerns such as authentication, sending emails, etc. This layer can be represented as a set of service classes. These classes can go into the same Core project that your real applications (MVC, batch job, desktop UI, etc.) reference. This way, you can reuse the cross-cutting logic in the referencing applications.
I think it depends on your project and how scalable it should be. For example, for an SOA architecture, you should separate your business domains into projects and even into different solutions. Correct me if I am wrong!
Indeed, for SOA projects you need to do that for “deployability” requirements. But even there, while you may independently version and deploy Sales and Shopping to different nodes, you wouldn’t have Sales.BLL and Sales.DAL deployed to different nodes!
The target of my article is those standard ASP.NET MVC web apps with the “Web, BLL and DAL” project pattern.
Great, that’s completely correct.
Thank you for the response.
Let’s say you have a requirement that’s not really SOA per se, but you know that some subset of functionality needs to be exposed as a RESTful API, and thus is a good candidate for Web API.
Would you put your API controllers in the same project?
Good question! I mentioned two reasons for splitting a project: re-usability and independent deployability. In the case of Web APIs, they’re already at the front of your application, and you’re not going to re-use them in another application. So, I’d ask the question: are you going to independently version and deploy this API? Again, in most applications, the entire web application goes through a build process and gets deployed in an automated or manual fashion. I’m not saying this is the case for every project out there, but in most projects, that’s the case.
The whole point of my article is to think twice and ask yourself: “Do you really need to split this project into multiple projects? What value do you get from this physical separation in practical terms?” Just because it looks good, or some book told you to do so, is not a good reason! Remember, assemblies are units of reuse and deployment.
Yes. Actually, in every project I’ve been involved in during the last 5-6 years, there was a requirement that the API be consumable by native mobile applications, and in some cases published to customers as a RESTful way to access functionality or data.
This meant it would be a bad idea to create a dependency between the API and the front-end code, because the feature set in the front end must constantly change, while the API does not have that luxury.
If the web UI (the MVC project with front-end code) needs new functionality in the API, the API can be progressed to version 2, version 3, etc. Now what if you have mobile (both phone and tablet) apps that depend on the API as well? All of these apps would have to be deployed to Apple’s App Store, Google Play and the Windows Store simultaneously, and thousands of customers would have to download the update at the exact same time! Not only that, all the customers who had connected to API version 1 would find their code broken. Clearly not possible; it would put us out of business, break things for lots of customers and cause civil disorder.
By keeping the UI and API separate, we can publish a new API V2, V3, V4 while keeping the old one up and running for some time period to allow existing mobile installations to continue to work, without breaking changes that affect everyone else. Meanwhile, the web UI, which is deployed much more centrally, can move forward to the latest and greatest API (which actually helps with testing the new API before allowing public availability).
This of course is making a case for physical separation on the basis of deployability more than reusability, but they are advantages that always seem to complement each other.
I agree with you that just because a book told us we should do something is not the only reason to do or not to do it, and the same is true for blogs. But, I thought your opinion was that it should never be done? Perhaps I misunderstood your position there.
The one challenge I’ve seen is that a lot of small apps start out without physical separation, because each is assumed to be a small project or prototype, and thus separation is not needed. But then it grows into something bigger, and by the time it is large, it becomes harder to create a clear physical separation, because too many dependencies on the business logic and data access have been baked into the primary application!
So, there is one school of thought that says “never say never”, and that by creating the physical separation at the beginning, you open up much more possibility for future deployment or reusability cases. How many apps can we confidently build where we can say “mobile apps will never need to talk to this app”? Not many that I’ve worked on.
Additionally, even if we never take advantage of the benefits of physical separation, it gives us good practice for the day we might need to work on extremely large apps in larger commercial software companies (where, in my experience, there is always a DAL, BLL etc.).
Ahhh, thank you for removing the giant question-mark I’ve had for all these years. I totally agree with you.
I remember the first videos I watched many years ago where the instructor was separating projects and then referencing back to the original project. I remember following along thinking … “how is putting these into separate projects helping me?” It never made sense to me, so thank you for this article!
Sorry, my question is in a different context.
I subscribed to a couple of your courses on Udemy. I would like to know your preference or experience with workflow kinds of systems. Many applications have different workflows (simple examples are leave applications, performance appraisals and standard request-processing systems). All these workflows can be designed in BPM tools/frameworks. Microsoft has WF, and there are other frameworks/tools available on the market, like Activiti.
1) What are your thoughts when designing these kinds of systems?
2) Have you come across or implemented such systems?
3) Which framework/tool do you prefer for designing these workflow systems so that they can be easily integrated with an ASP.NET application or Web API?
4) What’s the future of Microsoft WF?
5) Advantages of each tool/framework over the others
I am sure there are a lot of people looking for expert advice on the above questions. Also, there is not much help available on this topic.
Hope you will find time to address this. Thanks in advance.
First, let me define my terms consistently with Visual Studio terminology. A project contains multiple objects that compile together into a single deliverable, possibly a library or DLL. Projects relate to other projects through defined interfaces. A solution combines multiple projects into a single deliverable for deployment.
So I say ONE solution MANY projects for my MVC solution. My solution is a single web site that provides multiple applications to different user groups at my company. Separate projects provide strict separation of concerns and one solution allows them all to be deployed together.
NOTE: the MVC part is just the UI layer. In the MVC user interface I make use of Areas to keep the controllers, UI models and views for each application separate for easy maintenance.
You asked for a use case where the business layer or data layer would be reused. I have cases where certain tasks can be started manually by users going to the MVC website, but the same tasks can also be run by a batch-process console application. Same business and data layers, two different UI layers.
Just separate your UI from the Core (which includes both the BLL and DAL). Have the batch process or other applications reference the Core. Organize your Core just like Areas in MVC (group by feature).
Your conclusion is just plain wrong. It’s only good for hobbyist-sized projects.
The main advantage of “physical separation” of the layers is that it enforces the separation. When everything is in a single project, it’s easy to accidentally use a data access layer type inside your business layer class, or some type from WPF/ASP.NET.
In one project, it’s not obvious; you don’t get any compile errors when doing so. When you have separate projects, your business layer has no references to, for example, Entity Framework, ASP.NET, WPF or UWP.
When you then try to use a type from these unreferenced layers, you get a nice “The type or namespace name ‘IllegalTypeFromOtherLayer’ could not be found (are you missing a using directive or an assembly reference?)” error thrown at your head by the compiler, telling you: “Stop, what you are doing is wrong and violates the separation of concerns and decoupling.”
Your main (web) project still needs a reference to the DAL project. A junior programmer can simply add a class that belongs in the BLL to the DAL project! Who is there to stop that? I used to think like you, but if you spend a lot of your focus on adding guards so people don’t make silly mistakes, you’ll end up adding more complexity than the value you get out of them. It would be easier and more valuable to educate people than to put guards in the code and project structure, because no matter how strict you are, there are always holes that can be used to put things in the wrong place. Think about it!
What would stop it would be the fact that the junior programmer doesn’t even have to have access to the source code of the DAL if we decide that’s too close to the database for him to be changing stuff, until he gains more experience. He can simply reference the compiled assembly in the MVC project, and then the details of the DAL are abstracted out, reducing the complexity and the learning curve.
Usually, when something is modified at the DAL level of a large app, it can affect a lot of people. The closer the code is to the actual database, the higher the risk. Imagine a sizeable software company that has many products (some in .NET, some in Java, some in Python), all of them accessing a database with 4000 tables. It’s actually kind of dangerous to turn an inexperienced or new employee loose in a project that allows them to modify a single class that could change the database structure (like code-first and migrations! Update-Database. Oops! Big problem). Code-first and migrations are nice features that work great for getting a project kicked off, but once the code base becomes substantial, the database needs serious scrutiny before any change that affects it is checked into the source repo.
To answer your question, another thing that could stop the junior programmer from putting logic where it doesn’t belong is to have the repo set up to alert senior developers when that project is changed. Plus, a separate project helps the junior programmer think about the boundaries and what goes where. If all domain models are in a separate project, and only view models are in the MVC project, it helps them easily see the difference, instead of starting to add attributes to the view models and creating other problems.
The techniques that enable faster development on very small projects are often not the same techniques that are ideal for more substantial applications or teams.
Project-per-layer makes sense when there are different developers per layer, and your description fits that case. However, it should not be the default solution organization technique, as it is used today. It has many disadvantages, and only offers benefits in special circumstances.
I’m glad I found someone who has the same mindset on the subject as my CTO. I’ve been in fights with him for quite a while on this subject.
In addition to reusability and independent deployability, there is also independent develop-ability. The simplest example I can think of is working on small changes, let's say in the DAL: I don't want to wait for my BLL classes and MVC/API controllers to recompile every time I make changes only in the DAL.
In a case of a single project how would you enforce contracts between layers?
If interfaces and their implementations are defined in the same project, it's all too easy to just instantiate a concrete implementation of the interface and use it directly, thereby introducing tight coupling.
Boundaries between layers can be limiting sometimes, but most of the time that's a manageable drawback. Those boundaries have the benefit of enforcing decoupling, which I find very important for a number of reasons: less regression risk, easier technology swaps or updates (like moving from ObjectContext to DbContext in EF, for example), and so on.
How would you address these concerns? Maybe I’m old school, I don’t know…
It might be interesting for you to know that despite what I wrote in this article, I used to be a big believer of separating a project into multiple projects until 2 years ago. In fact, in one of my previous jobs, I suggested an architecture for a project that involved let’s say 5 domains, and for each domain I suggested 3 or 4 projects. So that solution would end up with 15 – 20 projects! One of my colleagues questioned me about the value of such splitting and I told him about all the theories of clean architecture and Uncle Bob and blah blah blah! I certainly had reasons for doing so.
But over the years, I've realized that practice and theory don't always align. I'd rather be pragmatic than follow every principle out there. I used to be a purist, and the more I tried to apply all these various theories in their purest form, the more dead ends I saw ahead of me, because many of these principles simply do not align with each other. They were introduced by different people in different contexts at different times.
You argued about enforcing contracts and preventing people from making mistakes (e.g. instantiating a concrete implementation of a class where they shouldn't). Read my reply to Tsen's comment. Now here is my question for you: if you split a project into multiple projects to enforce contracts and apply guards, do you think that a junior developer will not be able to make a mistake? He needs to new up a repository implementation and cannot find it in one of the projects. He simply adds a reference to the other assembly and gets access to that class! As simple as that! Then all your guards are useless. Also, these days, plugins like ReSharper immediately suggest adding references to other assemblies as you type the names of classes. I haven't used Visual Studio 2017, but this feature will make it into Visual Studio sooner or later too.
In one of my projects I promoted the use of the repository pattern to encapsulate queries. We had complex queries, and many of them were duplicated in a lot of places. So, I extracted these fat queries into repository methods; this way, not only was the code cleaner and easier to read, but if there was a bug in a query, we had to fix it in only one place. All good. Now, who was there to stop someone from using our DbContext and duplicating that query again? No one! If that DbContext was in another assembly, they could still add a reference. Let's say they couldn't. They could simply create their own DbContext and use it to write that query. See where I am going with this? That's why I'd rather educate my team members than put guards in front of them. That's what code reviews are for.
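As a rough sketch of the idea (the `Order` type, `IOrderRepository` name and in-memory store are all illustrative; in a real app the implementation would wrap an EF `DbContext`), encapsulating a fat query behind a repository method might look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical domain type used only for illustration.
public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
    public decimal Total { get; set; }
    public bool Shipped { get; set; }
}

public interface IOrderRepository
{
    // The "fat query" lives behind one named method instead of being
    // duplicated wherever someone has access to the DbContext.
    IReadOnlyList<Order> GetLargeUnshippedOrders(decimal threshold);
}

// In a real app this would be an EF-based implementation wrapping a
// DbContext; an in-memory version keeps the sketch self-contained.
public class InMemoryOrderRepository : IOrderRepository
{
    private readonly List<Order> _orders;

    public InMemoryOrderRepository(IEnumerable<Order> orders)
        => _orders = orders.ToList();

    public IReadOnlyList<Order> GetLargeUnshippedOrders(decimal threshold)
        => _orders.Where(o => !o.Shipped && o.Total >= threshold)
                  .OrderByDescending(o => o.Total)
                  .ToList();
}
```

If the query has a bug, it gets fixed in `GetLargeUnshippedOrders` once; but note that nothing in the compiler stops a teammate from rewriting the same LINQ against the context directly, which is the author's point.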
Also, easy technology swaps usually happen in textbooks, especially those from the 1990s. I've never seen it in the real world, with ASP.NET WebForms getting replaced by ASP.NET MVC like a piece of cake! Projects built with older technologies most often go through a complete re-write with a shiny new stack that is radically different from the original one, not in terms of implementation, but in "philosophy". Think of how we build SPAs with Angular or React and backend technologies these days. This looks nothing like when we used ASP.NET MVC to render views on the server and jQuery or other libraries to add some client-side functionality. And even that looks nothing like how we used WebForms and all those postbacks to refresh a page.
Complete re-writes don't happen just because a tightly coupled architecture doesn't allow technology swapping. They also happen because people like to start on a fresh canvas, on a green field, without any legacy baggage. They want to revisit the requirements that have been lost and twisted over 5 to 10 years. They want to revisit the UX/UI and use modern tools and modern ways of building software. ASP.NET Core, Entity Framework Core and Angular 2 are good examples of this: they went through a complete re-write with a new "thinking", not just one component swapped out for another. And guess what? I bet they'll go through another complete re-write in 5 to 10 years, because the needs of the world will change, and what they've architected today may not, and most likely will not, be applicable 5 to 10 years from now. And take into account that these are frameworks that thousands or tens of thousands of applications depend upon. These re-writes upset millions of developers, but they still happen!
Swapping ObjectContext for DbContext: if you properly encapsulate Entity Framework behind your repository interfaces, then all you have to change is your repository implementations. The rest of your application, which relies on the repository interfaces, remains unaffected. All these artefacts can live in a single project if they're going to be deployed together (which is the case for most applications, as I explained in my article).
If you've always had the good fortune of being able to re-write applications when a newer technology comes along, I would say you've been blessed! In my career I've mostly seen that luxury afforded only to smaller, mostly internal-facing projects. On larger commercial software projects, where there are lots of customers across the country using the product set, lots of developers using multiple technologies, and a million-plus lines of code in total, chasing down Microsoft's latest offering every few years is rarely feasible. Developers love the latest and greatest (including myself!), but what we want to do and what we must do for the benefit of the company are often different things.
Think about it: if you have to re-write applications every few years, that would indicate the code being written is not particularly reusable!
I have seen BLLs/DALs written in C# 3.0 that are still going strong; multiple different app types, UIs and APIs have been built on top of them. Reusable libraries, just like the .Net framework itself.
Microsoft has been adding to, not rewriting, the .Net framework for about 15 years. Existing libraries do not and should not change as often once mature; too many outside customers depend on them. Not everyone is publishing libraries to customers, but the concept is the same. Microsoft doesn't reinvent its own wheel every time a new trendy philosophy comes along. .Net Core is a serious and major change, but it's being made because there is a very good reason: they need to eliminate the System.Web assembly, among other goals.
Yes, when there is a necessity to rewrite then rewrite.
Some of those DALs I mentioned before were kept because Entity Framework and LINQ were unable to perform comparably. Of course they couldn't; they are only wrappers on top of the bare-metal technology underneath. Those DALs were closer to bare-metal ADO.Net and still provide the best performance, even if they're not the latest and greatest.
Again, following my reply to your other comment, are you REALLY going to re-use those BLLs and DALs in other projects? Some would say yes, maybe one day we’ll need that, so better to separate them from now. But this way of thinking often brings extra unnecessary complexity that eventually leads to a “design smell”. It’s exactly like how some developers abuse design patterns for the sake of flexibility.
What I've learned from Uncle Bob is: "Fool me once, shame on you; fool me twice, shame on me." That basically means when you see a kind of change in one part of the software, refactor it to make it flexible for changes of that kind. He taught me not to over-engineer and design for hypothetical scenarios that may never happen. Otherwise, all the baggage to support hypothetical scenarios accumulates and eventually leads to design smell.
So, back to the BLL/DAL scenario: are these BLL/DAL components really going to be re-used in other projects in this enterprise, or is that just a hypothetical scenario? Note that here I'm talking about applications, not frameworks. As I explained in my article, frameworks are and should be designed with modularity in mind. But enterprise applications are different. Even if you have a million users, refactoring the internal structure should not impact your users as long as the application works. For frameworks, that's not the case: you cannot constantly change the structure of a framework, because lots of other applications (not end users) depend upon it.
So, let’s assume these BLL/DAL components are indeed going to be re-used in other applications in the same enterprise. I really doubt that the DAL part would be “independently” re-used in another project, because its main purpose is to support the persistence of what we have in that BLL. So, quite often (if not always), they get re-used together. If they go together, they’re highly cohesive and they should be one deployable unit, one assembly: MyProject.Core. This assembly can be re-used in two different projects: MyProject.Web (built with ASP.NET MVC) and MyProject.Desktop (built with WPF). In this case, MyProject.Core gets reused in two different projects indeed and that’s a perfectly valid scenario.
I hope that better explains my view and resolves any confusion I might have created! 🙂
Well, one problem with making the BLL and the DAL the same unit is that on a project of large size, it's very nice to have the option of changing out your data access mechanism. For example, when ODBC is no longer in fashion and you want RDO (for anyone old enough to remember these ;). And when ADO.Net is no longer preferred, it's easy to swap it out for Entity Framework; and if you get bought out by a company that requires NHibernate (that happened to me!), you can easily move to that while modifying only the DAL, as long as the BLL never goes deeper than dealing with IEnumerables.
I think you and I have the same Uncle Bob, it’s just that we may be interpreting his words differently 🙂
I agree, don’t over-engineer and don’t make a lot of assumptions about business requirements that may or may not appear.
Examples of over-engineering:
– unneeded complexity, like lots of properties, methods, models or tables that cannot be directly mapped to requirements
– object hierarchies that try to address “what might happen one day” with requirements
– lots of UML or design docs that cannot be mapped to what is known today about user stories
But the decision to separate into a physical DAL or BLL is an architectural decision that business requirements cannot really help with. Making it early in a project opens up a lot of doors to the future that may or may not be used, but IF they are used, you'll be thankful to have made the decision. If the separation is needed but not built into the project early, it can lead to lots of regrets. We don't want to tell someone we need to rewrite an app because we didn't write reusable code or because we made some very limiting architectural decisions! At least I would make sure it's not my decision.
I have worked on large apps where it was assumed “we can separate this later if needed”, but it became an absolute horror because there ended up being things like a dependency on System.Web in a DLL that needed to be deployed in a smart client scenario after the company grew!
So, yes I do agree that we must always be cautious not to over-engineer, but we also need to be careful not to burn bridges behind us as we go!
I really don't think physical separation adds complexity. If anything, it makes the code easier to work on, because you always know what goes where, just as in an MVC app we know the HTML/Razor is always under the Views folder, request handling is in the Controllers folder, etc. When working with multiple projects in Visual Studio, the additional projects are really just expanded and collapsed like additional folders, so there is no additional complexity. It encourages putting things in the right place and actually makes the application easier, not harder, to work on as it grows.
My experience has been that most successful applications do, by their very nature, tend to grow into larger applications than originally expected. Nobody can predict this for most apps, so it’s better to create a pathway to future opportunity by creating physical separation.
See also my other response about versioning the API separately. This sort of requirement is becoming more the norm than the exception for most customer-facing software, and even internal enterprise apps, for example when different departments might have the need to access the apps data from a mobile app.
You shouldn't split when you are working alone at the weekend on a small project that will be used by a few users, maybe a POC project. So small, quick (and dirty), and alone are the keywords for the one-big-project conception.
If you see the possibility of growth in usage, if you can imagine hundreds of customers, you should think in separate layers with a well-separated naming classification (e.g. namespaces), which can be the foundation of physical/network-level separation later. In team work, a good approach is vertical and horizontal task/layer separation, and this approach is supported by a multi-project solution architecture.
Exchangeability and growth: many projects start on one storage implementation (MSSQL) and finish on another, or on several (Oracle + Redis + MongoDB), because of optimization. Entity Framework significantly limits the options, as do many other ORMs. Your MVC controller shouldn't know which storage implementation is used for data access if growth is on the table.
So, your recommendation about project separation highly depends on the project size and type, future scalability and number of associated developers.
I think the problem you are describing can be addressed with a microservices architecture: we create microservices per functionality/module/feature rather than per layer. We can also create projects per functionality/module/feature to mimic the microservices approach. Project-per-layer organization makes the most sense when different developers work on each of the layers. I have seen many projects using the project-per-layer approach, but none of them had separate developers for the BLL, the DAL and other invented layers, except the UI. The UI can be put in its own project, no issues there.
Mosh wrote, “I mentioned two reasons for splitting a project: re-usability and independent deployability.”
There is a third reason: improved maintainability.
Whenever you can separate the presentation layer from the business layer and the business layer from the data layer, as in the typical 3-tier architecture, you provide a structure that encourages the loose coupling, strong cohesion and encapsulation of separate concerns that has proven over and over again to create more robust, maintainable software.
And, like L. Faranneti said and as in my own experience, there really are times when you want multiple presentation layers for the same application. In my case I also want one presentation layer for multiple applications, where each application's business logic and data logic are in their own projects, separated from the MVC controllers and views.
Also, some of the applications on this website use an Oracle database, others use SQL Server, and others consume or produce Excel sheets. So having a separate .dll project for each data layer is very useful.
There is real value in having separate project areas. It's easier to assign different sections to different developers. Having a defined interface to each .dll allows independent work and clean divisions of code that can be debugged separately; errors are found more easily because they tend to be localized. I can also eliminate an application simply by not referencing its .dll and removing just its controller if the application becomes obsolete, and in a flash the whole system is cleaned up with no repercussions on anything else in the web site.
And sure, junior programmers can cause damage, but NOT AS EASILY. I have seen junior programmers who went and made every data item global and blew the whole concept of encapsulation out of the water. But if they have so little understanding of variable scope and why it’s important they are going to do even more damage when everything is in the same project space. That’s what code reviews are for.
So for me, it will (ALMOST) always be one Visual Studio solution with multiple projects.
Did you read my comment to L. Faranneti’s comment? The one where I talked about MyProject.Core. I’m going to update the article because I believe it applies to the scenario you mentioned in your comment too.
You have multiple applications as part of a single website. Sure you may want to develop, version and deploy these applications independently. It’s a good candidate for putting each application into a separate class library. But it doesn’t make sense to have two separate projects for each application: Application1.BLL and Application1.DAL. The reason for that is that Application1.DAL is most often (if not always) used to support persistence of Application1.BLL. It’s very unlikely that you’re going to re-use that assembly on its own.
I have to disagree with you that multiple projects improve maintainability and separation of concerns. You have maintainable software when you follow the single responsibility principle at various levels: methods, classes and namespaces. Dependencies between classes should always point from details to abstractions. Whether you have all these classes in one assembly or spread them across separate assemblies has no impact on maintainability as long as you follow the SOLID principles of object-oriented design. SOLID is about your classes and their associations; it doesn't deal with physical separation into multiple assemblies.
Assemblies, as I mentioned before, are units of versioning and deployment. You can divide a project into 3 assemblies (Web, BLL and DAL) and still have really fat classes tightly coupled to each other. Does physical separation improve maintainability in that case? Of course not! Once again, physical separation is all about versioning and deployment.
Well I believe what Robert and I are both saying here is that having physical separation in multiple projects is not a substitute for SOLID principles or other practices that lead to good architecture, but SOLID principles do not attempt to provide answers for some of the things we’ve talked about, like deployment, training junior programmers, web service considerations, multiple presentation layers, one app talking to multiple database types, etc.
You mentioned in your article update that there is no need for separate BLL and DAL, because the DAL only provides persistence to the BLL. Well, that is not always the case. The original purpose of the separation between the DAL and the BLL is to make it very easy to change the data access mechanism (I mention this in another post, but it's worth repeating). Remember ODBC, then DAO, then ADO. Then for a while, ADO.Net was the trend. Right now it is Entity Framework. But will it always be EF? Not likely, if we look at Microsoft's history.
However, I think I understand where your perspective comes from here, because the advent of LINQ and lambda expressions seems almost to encourage a tight binding between business logic and data (it's so convenient that separating them feels like too much extra work). Entity Framework itself is a DAL of sorts, an abstraction over the real database underneath (which is what the DAL was always intended to be). But let's say your application grows to be large, and one day there is a requirement to substitute some other ORM or data access mechanism. If your BLL sticks to business logic, then you only need to modify the DAL.
Back to SOLID for a moment. If we look at everything that’s being encouraged today, whether it is SOLID or specific terms like separation of concerns, dependency injection, repository patterns, or even MVC itself. Do you know what they all have in common? They are designed to accomplish the SAME THING THEY WERE TEACHING US IN THE 80’s IN SOFTWARE ENGINEERING COLLEGE COURSES!! Two primary goals: loose coupling, and high cohesion. Clean architectures have always revolved around those two principles, even before OOP itself became popular (late 80s early 90s). Those two goals are what is driving every aspect of all of these new emerging patterns and techniques.
So keeping that in mind, I think whenever you can decouple at logical tiers (presentation, logic, data), it is almost always preferable, and not much more work as long as you adopt it early in the project. If you wait too long, and the business logic gets tightly coupled with the data access, then usually there is nothing to stop the presentation code from getting tightly coupled to the business logic or the data.
The thing is that you still separate your layers logically (i.e. separate interface for repository, service, model etc.), but keep your types in the same project (Core project). It is very easy to swap implementations. You can even use different data access technologies for different modules (EntityFramework for identity management, Dapper for business transactions, MongoDB for user preferences etc.).
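A minimal sketch of that idea (all names here, like `IPreferencesStore`, are made up for illustration): the contract and several interchangeable implementations can live in the same Core project, and the composition root or DI registration decides which one is wired in.

```csharp
using System;
using System.Collections.Generic;

// Logical layering inside one project: the contract...
public interface IPreferencesStore
{
    string Get(string user, string key);
    void Set(string user, string key, string value);
}

// ...and two interchangeable implementations. In a real app one might be
// backed by Entity Framework and the other by MongoDB; dictionaries keep
// the sketch self-contained.
public class SqlPreferencesStore : IPreferencesStore
{
    private readonly Dictionary<string, string> _rows = new Dictionary<string, string>();
    public string Get(string user, string key)
        => _rows.TryGetValue(user + "/" + key, out var v) ? v : null;
    public void Set(string user, string key, string value)
        => _rows[user + "/" + key] = value;
}

public class MongoPreferencesStore : IPreferencesStore
{
    private readonly Dictionary<string, string> _docs = new Dictionary<string, string>();
    public string Get(string user, string key)
        => _docs.TryGetValue(user + ":" + key, out var v) ? v : null;
    public void Set(string user, string key, string value)
        => _docs[user + ":" + key] = value;
}

// Consumers depend only on the interface; swapping the implementation is a
// one-line change at the composition root, no project boundary required.
public class PreferencesService
{
    private readonly IPreferencesStore _store;
    public PreferencesService(IPreferencesStore store) => _store = store;
    public void SetTheme(string user, string theme) => _store.Set(user, "theme", theme);
    public string GetTheme(string user) => _store.Get(user, "theme");
}
```

The decoupling comes from the interface and constructor injection, not from which assembly the types happen to sit in.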
I think Mosh explained it very well. He actually refreshed our memory on the first lessons of real software engineering.
BLL/DAL/BAL, or whatever people use, is just a common convention in .NET projects, copied and inherited into new projects. Bad practice.
But anyway, programming is like art: different styles, colors, layers, servers. Just mix it up so it fits your purpose and goal. Who cares about rules? And every company has its own conventions and regulations. Just like art galleries, they are dictated by the artist. And I have never seen two architects with exactly the same solution.
I think if you have a project with, let's say, 300 tables, then it makes sense to split up the entities, repositories, etc. by some type of "bounded context" (I use that term loosely) into separate .dll projects. Different developers can work on different parts of the application, you can deploy in a more modular fashion during development, or you can even decide not to develop one "bounded context" of the application for the first version. I like to group related entities together into a separate .dll. Plus, it's about keeping your SANITY when you have a bigger project with over 100 tables.
Well, that’s exactly the second use case for the physical separation which I explained in my article!
I read this article a bit late. Nice article. From my experience, this is what developers follow without asking whether it is really required.
The 2nd option seems interesting for multi tenant application.
Have a separate project for each customer. This helps during deployment: deploy one project instead of the whole solution for a single customer's change.
Please let me know if anyone has applied a similar approach in multi tenant solution.
There is also one more case for splitting a project into smaller projects: readability.
When I have all my logic in one separate project, I can read exactly what the application does without any ASP/web skills.
You can separate your core logic into a Core project and have applications reference it. Use a naming convention to identify components that contain pure logic. There's no need to run back and forth between projects to find out how user management works in your application: all types related to a single piece of functionality (a module) are placed inside a single folder hierarchy. Framework types are grouped in a similar way; think of System.Text.RegularExpressions or System.Net.Sockets.
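For example (MyProject.Core and the module names below are hypothetical), a single Core assembly can still group types by module through folders and matching namespaces, much like the framework does:

```csharp
// One assembly (MyProject.Core), organized by module rather than by layer.
// The folder structure mirrors the namespaces: /UserManagement, /Billing, ...
namespace MyProject.Core.UserManagement
{
    // The module's data access contract sits next to its business logic.
    public interface IUserRepository { }

    public class UserService
    {
        private readonly IUserRepository _users;
        public UserService(IUserRepository users) => _users = users;
        // User management business logic lives here, in the same folder
        // hierarchy as everything else related to this module.
    }
}

namespace MyProject.Core.Billing
{
    // A different module keeps its own contracts and services together.
    public interface IInvoiceRepository { }

    public class BillingService
    {
        private readonly IInvoiceRepository _invoices;
        public BillingService(IInvoiceRepository invoices) => _invoices = invoices;
    }
}
```

Everything compiles into one deployable assembly, yet a reader can find all of "user management" in one place, the same way System.Net.Sockets groups all socket types.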
I agree with many ideas in this discussion, especially about building layers to prevent "stupid" developers from making mistakes: developers are very smart people, and they don't need our help in that way.
But there is one aspect, I wanted to mention. All these discussions are very technology centric. We are talking about UI Frameworks, Communication Frameworks, Database Servers, etc.
But we often forget one aspect: why are we implementing this software again? Ooohhh yes, to solve our customers' problems. Most of the time customers are not geeks. They don't care whether we are using Entity Framework, Dapper or plain ADO.Net. They don't care about WCF, Web API, Remoting, etc. either. They just want us to solve their problem in an automated way.
Therefore I suggest everyone think about solving the customer's problem in a technology-agnostic way. There are patterns for this, like Onion Architecture, Ports-and-Adapters, Hexagonal Architecture, or whatever you want to call it.
The basic idea of all these patterns is to think in an object-oriented way to represent a model of the world in which we want to solve a problem. Let's call these objects our domain classes. For example, implement a Customer class which has a method to change its address; this class knows whether an address is valid or not.
Besides this domain layer (please don't confuse it with DDD; this kind of solution can also be built without DDD), imagine we have another layer called the use-case layer or application layer. In that layer we could have classes that represent workflows. For example, one workflow (or command) could be ChangeCustomerAddressByAdmin and another could be ChangeCustomerAddress.
Both use cases do the same thing: change the address of a customer. But the "…ByAdmin" use case could additionally send a notification to the customer letting him know about the change. Please notice, I am not saying send an email or send an SMS; I am saying send a notification. At that point it is not important whether we will use MS Exchange Server or the WhatsApp API. Nor do I care whether we are using MS SQL Server, Postgres or MongoDB. I just want to know that there is a storage I can read from and write to.
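A minimal sketch of the two layers just described (all names, like `INotificationSender` and `ICustomerStore`, are illustrative; real adapters for Exchange, SMS, SQL Server etc. would be plugged in behind the abstractions later):

```csharp
using System;
using System.Collections.Generic;

// Domain layer: the Customer itself knows what a valid address is.
public class Customer
{
    public string Address { get; private set; } = "";

    public void ChangeAddress(string newAddress)
    {
        if (string.IsNullOrWhiteSpace(newAddress))
            throw new ArgumentException("Address must not be empty.");
        Address = newAddress;
    }
}

// Application layer depends only on abstractions: "send a notification",
// "there is a storage", never "send an email via Exchange" or "use SQL Server".
public interface INotificationSender { void Notify(string message); }
public interface ICustomerStore { Customer Load(int id); void Save(Customer c); }

// One workflow (command) class per use case.
public class ChangeCustomerAddressByAdmin
{
    private readonly ICustomerStore _store;
    private readonly INotificationSender _notifications;

    public ChangeCustomerAddressByAdmin(ICustomerStore store, INotificationSender notifications)
    {
        _store = store;
        _notifications = notifications;
    }

    public void Execute(int customerId, string newAddress)
    {
        var customer = _store.Load(customerId);
        customer.ChangeAddress(newAddress);
        _store.Save(customer);
        // The admin variant additionally notifies the customer.
        _notifications.Notify("Your address was changed to " + newAddress + ".");
    }
}

// Simple in-memory adapters standing in for the real "details".
public class InMemoryCustomerStore : ICustomerStore
{
    private readonly Dictionary<int, Customer> _customers = new Dictionary<int, Customer>();
    public Customer Load(int id)
    {
        if (!_customers.ContainsKey(id)) _customers[id] = new Customer();
        return _customers[id];
    }
    public void Save(Customer c) { /* already held in memory */ }
}

public class RecordingNotificationSender : INotificationSender
{
    public List<string> Sent { get; } = new List<string>();
    public void Notify(string message) => Sent.Add(message);
}
```

The domain and application classes compile and test without any database, mail server or web framework, which is exactly the point the comment makes.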
Thinking that way, I believe we can concentrate on solving our customer's problem in just two projects: Domain and Application. After I have implemented the solution and my automated tests prove that my implementation calculates correctly, I can start thinking about REST or SOAP, relational DB or document DB, etc.
Please notice that my intention was neither to be able to deploy something easily, nor to be able to reuse something. The intention is simply to solve my customer's problem.
I am looking forward to read comments about my post.
Separate deployable DLLs matter: when you build and deploy, you know which component to test and patch in the application.
You did not mention all the reasons why we need separate projects; you mentioned only re-usability and independent deployment. I think this is very misleading for the passionate people who really follow your articles, but again, it is your perspective, and it is up to readers to base their architecture decisions on the business needs and their own context.
Let me ask you the following questions.
1) For creating a unit test project, do you want to reference the whole web project (because the business logic is in the web project), or would you rather reference a business project that is separate from the web project?
2) ASP.Net Web Forms –> ASP.NET MVC –> ASP.NET Core… and what's next? Did you think about swap-ability in addition to reuse? I have seen this, and done it, in an organization where having the business logic in separate projects really helped swap-ability. This is a huge win!
3) Did you think about integration tests targeting the web project and unit test projects targeting the business project? Long-running integration tests can be run separately, maybe once daily, and the job that runs integration testing can target only the web project. Is this a good thing or a bad thing? I think it is a good thing.
The job that runs unit testing can target only business project.
4) Did you think in terms of the Web API being just an interface to your service/microservice (business logic)?
5) Did you think about how loose coupling can be applied anywhere, from real life to code to the organization of your code? The beauty of loose coupling is that you do not have to worry about how and what you are going to swap or extend; it provides the ability for extension and change in a cleaner, less complex way.
6) So far, I have not heard one strong disadvantage of loose coupling, i.e. layering. It would really help if you provided strong disadvantages instead of saying there are no advantages to layering.
I personally worked in companies where layering helped.
1) I would suggest creating a separate Core project and reference that in a unit test project.
2) Again, use the Core project and you can layer any number of application projects (Web API, Desktop, batch process etc) on top.
3) Same thing, use the Core project.
4) Same thing, use the Core project.
5) You still use loose coupling, interfaces, dependency injection and whatnot in this type of project; you just don't have separate projects for your layers (DAL, BAL, application services, etc.).
6) This article does not argue against layering. It just says that a layer is a logical concept and should not be put in its own project: no separate project for the DAL, BAL, application services (not the Web API that is deployed, but the classes used by the Web API or other applications), etc.
Amazing article! Even experienced architects split the project without any reason. Thanks for the great article. One can just use a folder structure and dependency injection for calling from layer to layer.
I guess it depends on the kind of application you’re developing.
For example, in my recent work experience at a financial services company, there are multiple applications that use the same logic to generate some information. To us this kind of application is the majority, so it doesn't make sense to centralize the logic in one project; instead, we decouple it into as many reusable components as we can. To me it's actually surprising to hear that this isn't common for many of you.
You can reuse the logic, but you don’t reuse DAL and BAL separately. So, you may have one Core project and as many application projects as you like.
I’m working on a project right now where I have multiple projects in the same solution:
Now, I could certainly see the argument for combining the BusinessLayer and DomainModels (the BusinessLayer contains services, the DomainModels are self-explanatory) and DataAccessLayer (since the EF depends on those specific entities).
However, the reason why I at least have the front-end, BLL, and DAL separated is because these three applications need to access them in different ways. The WebApp is the primary Razor Pages .Net Core 2 application. The BatchApp is a console application that will be run on a regular basis to send out notifications and sync with an external data service. The MigrationApp is a console application which migrates data from an old database to a new one and converts between the very different schema (this application is replacing an existing one). In addition to the DAL’s EF DbContext, there is another DbContext that gets utilized by the MigrationApp, with its own set of models (in a separate project). I’m pretty pleased with the dependency structure with how I’ve separated things, and I just like the way it organizes the code and the clear layers of abstraction.
Oh, and there are two unit testing projects as well; one for BatchApp and one for WebApp.
It is key to remember that your article is focused on physical separation. Older systems used a distributed approach, where today the focus is more on API integration and SaaS. Using a layered architecture is another concept, for logical separation of code. The reason you use this along with, say, a Repository pattern is to remove coupling.
There are instances, as you have pointed out, where you may have a desktop front-end, a web portal, or just an API exposed for other developers to consume, so having the Business (Core) and Data “layers” in different projects but in the same solution gives you re-usability, extensibility and scalability. You referenced Entity Framework as an example, which goes along with a modular design pattern, which is great.
An enterprise-level application that has many facets benefits from a layered architecture for switching out database systems without modifying the Business (Core) project. This is done with interfaces and dependency injection.
The point I take from your article is the confusion from “tier” concepts being interchanged with Layered Architecture. They are different.
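The interfaces-plus-dependency-injection idea described above can be sketched in a few lines of C#. All names here (`IOrderRepository`, `SqlOrderRepository`, `Order`) are illustrative placeholders, not from the article:

```csharp
// In the Business (Core) project: only the abstraction lives here.
public interface IOrderRepository
{
    Order GetById(int id);
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// In the Data project: one concrete implementation per database system.
// Swapping databases means adding another class like this one;
// the Core project never changes, because it depends only on the interface.
public class SqlOrderRepository : IOrderRepository
{
    public Order GetById(int id)
    {
        // A real implementation would query SQL Server (e.g. via EF) here.
        return new Order { Id = id };
    }
}

// In the composition root (e.g. Startup.ConfigureServices in ASP.NET Core),
// the interface is bound to whichever implementation you deploy with:
// services.AddScoped<IOrderRepository, SqlOrderRepository>();
```

Note that nothing in this sketch requires the interface and the implementation to live in separate class libraries; the same decoupling works with folders and namespaces inside one project.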
Logical layering is what matters, not physical layering (project per layer). I have written applications around a module concept. A module is a vertical of functionality, just like with microservices. It is mostly self-sufficient. It can have its own dependencies configured and injected. All these modules go into the Core project. They are in different folders, inside namespaces that correspond to the folder structure. I keep deployable projects (web site, Web API, desktop, batch process, etc.) separate, though. They all reference the same Core project.
I found your article because I am considering breaking up my single MVC app into multiple projects, and I was looking for advice on which axis I should do the separation on. I found it surprising that your answer was no, you shouldn’t break it up.
I agree that you can keep things simpler by keeping it in a single project, and I really wasn’t considering breaking it up for any of the reasons you mentioned. But what is leading me to break it up is the performance of the compiler/linker. I was getting very frustrated with the time it would take to make a change and run the unit test for that change. With my project, it is now at the point where making a simple one-line change takes about 90 seconds to rebuild the main project and then another 90 seconds to rebuild the single unit test project (I have about 6000 unit tests in my single unit test project). Then it takes a long time to load the big unit test app (I assume the loader is busy resolving all the links to objects that I don’t need in order to test the code I am working on). The total time is a minimum of 5 minutes, sometimes a fair bit more, just to test a one-line change. All the while the fans on my laptop are buzzing at full tilt because of all the work that is being done.
I decided to break up my unit test project into feature-based unit test projects. I am doing this as I go, so I still have the big unit test project, but I have broken out two separate projects – one has about 600 unit tests and the other about 300. Now when I only make a change to my unit test code (so I don’t have to recompile the big MVC DLL), I can make a change and run the test in about 3–5 seconds. None of the code changed, just the packaging. That is a huge difference. And the effect on productivity is massive.
So now that I know I can make my unit tests compile and load faster with smaller projects, I am looking around to see how I should break up my MVC app into smaller projects. My expectation is that this will allow my change/test cycle to drop dramatically when the change includes a change in the project code and not just in the unit test code. I assume that rebuilding the changed (smaller) DLL will be much faster than rebuilding the large single DLL, and beyond that, my unit test projects will be able to pick and choose the smaller DLLs that need to be linked, so the link and load steps will be drastically shorter as well.
How would you answer the question you pose if you consider compiler performance as it relates to developer productivity?
If you have issues with compilation times, I would suggest breaking your code into module projects, where a module is a vertical of functionality such as user management, payment processing, ads, etc. You may have projects such as the following: YourNamespace.Identity, YourNamespace.Payments, YourNamespace.Ads, etc. This is similar to what microservices use; they just extend this concept and add their own database and API, etc. I think one should organize solutions around what is most important for the domain. I don’t think any business domain cares much about controllers, models, DALs, enums, viewmodels, etc.
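A solution organized by vertical modules, as this commenter suggests, might look like the layout below (the project names are purely illustrative):

```
MyApp.sln
├── MyApp.Web        (deployable: ASP.NET MVC site)
├── MyApp.Api        (deployable: Web API)
├── MyApp.Identity   (module: user management)
├── MyApp.Payments   (module: payment processing)
└── MyApp.Ads        (module: ads)
```

Each deployable references only the modules it needs, so a one-line change in, say, the payments module rebuilds that project and its dependents rather than the whole solution.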
It really depends. If you’re going to have other applications that use the same business logic, the same database schema/model, or the same utilities, it obviously makes sense to split those things out into class libraries.
One advantage that I see in creating multiple projects is that you prevent accidentally breaking the responsibilities of a layer.
For example, when I have to connect to a database, I have to put that code in the DAL project, because only it has the references needed to do so. When we keep both the BLL and DAL in the same project, all that is needed to connect is, maybe, just adding a using statement.
When working in a team with different levels of developers this is a concern. Don’t you think?
I think that keeping this “hole” open to be abused lets you identify developers who break the rules (knowingly or otherwise). You can then either educate them or, in severe cases where you think education won’t help, fire them. I think that truly ignorant developers can break your software without these “holes” (e.g. catching an important exception and swallowing it). Code from developers (juniors?) you do not trust should be reviewed and approved before going to production.
We have a huge legacy WebForms app written in VB. It is a hot mess. We want to slowly migrate to newer technologies and also convert to C#. We cannot do it all at once, so we are adding new functionality using C#/MVC, and while that works, I would rather move the MVC stuff out into its own separate project, but still reference it from the main application. This will not only reduce the size of the main assembly, it will cut down on compile times and make it easier to test.
So I think your blanket answer of “no” needs to be qualified. “No” for simple projects, “Maybe” for larger projects, depending on context.
I have 3 web projects that share the same DLL project (Identity, PublicApi and WebApi).
The identity project, because of the domain user
The problem is that I have to publish them all together
Do you have any suggestion?
I am getting confused about how to design a web architecture. I have an ERP project on my hands and am looking for a good architecture to design it in .NET Core. It’s an education ERP, like a “Student Management System”, containing a number of modules. So, can you help me out with the design and architecture, or give some suggestions? Can you provide a link where I can find some samples to refer to?
Love all your articles and videos keep up the great work. In many of your videos you use two folders Core and Persistence. If we move the contents of these folders into a MyApp.Core project how would a recommended folder structure look? Would the contents of the original Core folder like Models, Repository be at the root of the project along with a Persistence folder or do you recommend something else?
“I’m not entirely sure how this trend started but I’ve seen some developers split an ASP.NET MVC project into multiple projects”.
I would like to know just that. How this nonsense (Project per layer solution organization by default) started. Any ideas?
I think this started in Ruby on Rails and spread to other environments due to success of MVC.
Eehmm. Maybe because this has been considered best practice for more than 15 years.
I really cannot think of one good practical reason not to do this. Multiple assemblies are not a practical disadvantage.
The only reason I read for putting everything in one project is “because separating does not add any extra value”. Well, it does. Even if it is not necessary in the initial first version of a project, it still makes sense to put it in different assemblies. Why? Because business needs can change, and you might need to reuse parts of your application. Why would you want to put in extra effort and risk breaking existing software if you could have set it up easily in multiple projects?
Besides that, there are a lot of other valid arguments, like better maintainability, fewer folders and classes per project, it being harder to make mistakes, better separation of concerns, etc., etc.
You don’t win anything by putting everything in one project. It’s just typically a purist religion thing.
Nice article, Mosh! I agree with almost every word in this article. We have seen so many ASP.NET MVC based web systems split into multiple projects (Web, BLL, DAL, etc.). Besides this, in the “Web” project, the class files are further sub-organized in folders like “Controllers”, “Views”, “Models”, “Services”, etc. I believe the default project template for creating a new ASP.NET MVC project in Visual Studio is the source of this wrong approach.
Simple and great article Mosh.. Congrats.. Thank you..
After reading all the comments I am confused: should I split the project or not? I have one situation: I am creating the architecture of a big product in the travel domain, with many web modules like hotel, flight, vehicle, etc., and the technology is .NET Core.
I have created separate layers for core, infrastructure, DAL, models, services and web. Now I want to design the web project for different businesses. What is the best option: using MVC areas, or splitting the project and using the machine config to share my session and cookies?
You should always have layers in your project for understandability, changeability and/or testability. There are two options by which you can achieve layers.
1. By Namespaces/Packages
If you do not need to deploy layers on separate physical servers, then logical layers by Namespaces/Packages are enough.
2. By Class Library Projects
a. If you need to deploy layers on separate physical servers, then physical layers as Class Library Projects are a good candidate.
b. If your application exposes its functionality as services so that other applications can consume them, then you should have two separate projects: one for the UI and the other for the rest of the application (having namespace-based logical layers).
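Option 1 above (logical layers by namespaces within one project) is purely a matter of folder and namespace discipline. A minimal sketch, with `MyApp` and the class names as placeholders:

```csharp
// Folder /Core in the single web project: business logic layer.
namespace MyApp.Core
{
    public class InvoiceService
    {
        // The business layer depends only on the repository abstraction.
        private readonly MyApp.Persistence.IInvoiceRepository _repository;

        public InvoiceService(MyApp.Persistence.IInvoiceRepository repository)
        {
            _repository = repository;
        }
    }
}

// Folder /Persistence in the same project: data access layer.
namespace MyApp.Persistence
{
    public interface IInvoiceRepository { }

    public class InvoiceRepository : IInvoiceRepository
    {
        // EF DbContext usage would live here.
    }
}
```

The layer boundary here is enforced by convention and code review rather than by the compiler, which is the trade-off compared with separate class library projects.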
Can I deploy two different projects (MVC, Web Forms) using the same domain, and communicate between them without using different domain names?
I maintain an enterprise application and I do side work. For side work, I prefer to keep things simple in one project.
For the enterprise project, we have 3 different ways of getting data into the system:
1) Web by user
2) RESTful APIs in a separate project (with API keys, etc.)
3) File data in from FTP
Do you think the model should still exist in separate projects?
Hi, one other point that nobody mentioned: if you have a complex application, having all of the code in a single project will slow down the build time. If the logic is separated, the build will detect only the projects that have changed and reuse the ones that haven’t.
Thank you for this wonderful and well written piece. With this article, you have succeeded in putting my thoughts in one place.
For small applications you should manage layers by namespaces/packages. These are logical layers within one project. Separate class library layers should only be used when we need to deploy them on separate physical servers. If we need to deploy them on a single web server, then namespace/package-based layers are ideal for separation of concerns. So, separate class library projects are not the only way to achieve layers and separation of concerns. We can achieve the same with namespaces/packages within a single project.
To me this is a no brainer.
I always split up my source code into multiple class libraries.
I disagree that this ‘does not add any extra value’. It does. I’ll explain why below.
But before that I’d say the opposite: even if it is not initially necessary to split the code into different class libraries, it doesn’t hurt either. Sure, you have more assemblies to deploy. So what? I don’t see that as a disadvantage.
But also: what do I win by putting everything into one assembly? Nothing. That I have only one assembly instead of n assemblies? There’s just no practical advantage in this. Also, in reality, you will always have multiple assemblies anyway, because of using third-party assemblies.
But suppose you think you don’t need to split it into extra libraries, so you put it all in one assembly. And then, after a while, you discover that you need to reuse one “layer”. Then you still need to extract it into a separate class library. That will take a lot of unnecessary time, and chances are you will make mistakes and break the software.
There are a lot of obvious reasons to split into separate class libraries: separation of concerns, reusability, minimizing the number of classes and folders in one class library.
Never saw an example of reuse? Really?
I’ve seen this a lot. From generic reusable libraries to different frontends sharing the same logic.
Better to split into too many class libraries than too few.
Just my 2 cents.
Thanks for sharing your valuable thoughts on this. This is really very useful for me. Keep up the good work.
I have many Business Logic class libraries shared by WPF and ASP.NET solutions as well as Console Application batch jobs. The BLL class library consumes a DAL class library and a DTO class library. This occurs when all code is on the same client as well as when the application spans multiple networks and the consumption requires a web service. In fact, the same BLL, DAL, and DTO class libraries can be used in both scenarios. .NET makes it seamless.
This is one of the hot topics, judging by the 93 comment responses.
It is still confusing whether to split the project into multiple projects or not after seeing all the comments.
But I always feel that simplicity in coding is good. As quoted by Mosh Hamedani (“Coding Made Simple”).
So I will prefer the simpler approach. There will be some problems in choosing it, but there will also be solutions for them.
Thank you for this post. It is a point of clarity amidst a lot of cruft. I had been struggling with a new ASP.NET Core application and went the route of separate projects based on previous experience outside of Core. I found myself facing bizarre compromises for no justifiable reason as a result of that initial choice.
Nice post. What about test projects? So far I haven’t seen any valuable reason to split them the same way as the code, especially if there are tests that test integrations between the core and separate deployables. I’m leaning towards one test project containing both unit and integration tests (with the integration tests requiring env vars to run).
Very good post.
I am migrating an ASP.NET MVC 5 application to ASP.NET Core MVC 5. This application is very complex and uses many third-party libraries (Syncfusion, DevExpress, and many other open-source ones, e.g. Autofac, AutoMapper, GridMvc, ClosedXML, etc.).
Migrating in one go is not feasible, as many clients are using it and new clients will start using it during the migration.
Is there any way we can run the two apps (ASP.NET MVC and ASP.NET Core MVC) side by side, so that every sprint we migrate a few screens from the old app to the new app?
Could you please recommend the best approach such that in the browser it will still look like a single application, but server-side we will have two virtual directories? I am not sure how request forwarding and authentication will work with this approach. Any guidance will be very useful.
Putting it all together will only work in one scenario.
Putting it separately will always work in all scenarios.
Hello Mosh, thank you for your article, you have saved my life. I have to make an application with ASP.NET Core MVC for my University. You helped me.