Architecture patterns are design approaches that help organize and structure web applications for optimal maintainability, scalability and flexibility.
This post will explore one of the most widely adopted standards today: the onion architecture. Additionally, we will discuss the practical application of this pattern in an ASP.NET Core application.
Architecture patterns are high-level design solutions or models that provide a structured approach to organizing and designing the architecture of software systems.
These patterns offer a set of best practices and guidelines for solving common design problems that developers encounter while developing complex applications. Architectural patterns help software systems to be scalable, easy to maintain and adaptable to changing requirements.
Key features of architectural patterns include:
Reusability: Architectural patterns are reusable solutions that can be applied to different projects and domains. They encapsulate design expertise, making it easier for developers to apply proven design concepts.
Abstraction: Patterns provide a level of abstraction that focuses on the high-level structure and organization of a system rather than specific implementation details. This abstraction allows developers to think about system architecture in a more conceptual and general way.
Scalability: Architectural patterns are designed to accommodate future growth and changing requirements. They help ensure that a system can scale in functionality and performance.
Maintenance: By promoting a clear separation of concerns and modularization of components, architectural patterns facilitate the maintenance and extension of a software system over time.
Consistency: Patterns establish a consistent structure and design approach, which can be beneficial in team environments and large software projects.
Documentation: Established patterns come with extensive documentation and resources, which helps developers understand and apply them effectively.
In the context of ASP.NET Core, several architectural patterns are widely used. In this post, we will learn about one of them—the onion architecture pattern.
The onion architecture pattern is a software architecture pattern widely used in ASP.NET Core and other modern application development frameworks. It is a variation on traditional layered architecture that promotes a more flexible and sustainable way of designing and structuring applications. Jeffrey Palermo popularized the onion architecture pattern, which is particularly suitable and recommended for building robust, maintainable and testable applications.
The main idea of onion architecture is to organize the application into circles or concentric layers, with each layer depending only on the inner layers.
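To make the dependency rule concrete before we build the real sample, here is a minimal, simplified sketch — the type names are illustrative only, not part of the project we create below. The point is the direction of the references: the inner (domain) layer declares an abstraction, and an outer layer implements it, never the reverse.

```csharp
using System;
using System.Collections.Generic;

// Innermost layer: pure domain types, no references to outer layers.
namespace Sketch.Domain
{
    public record Order(Guid Id, decimal Total);

    // The domain declares what it needs as an abstraction...
    public interface IOrderStore
    {
        void Save(Order order);
    }
}

// Outer layer: depends inward on the domain — the domain never sees this.
namespace Sketch.Infrastructure
{
    using Sketch.Domain;

    // ...and an outer layer supplies the concrete implementation.
    public class InMemoryOrderStore : IOrderStore
    {
        private readonly List<Order> _orders = new();
        public void Save(Order order) => _orders.Add(order);
    }
}
```

Because `Sketch.Domain` has no `using Sketch.Infrastructure`, the infrastructure can be replaced (MongoDB, SQL, an in-memory fake for tests) without the core noticing.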
Next, let’s learn about and implement each of the four main layers of a typical onion architecture application in ASP.NET Core.
To create the sample project you need to have the following:
You can access the complete source code here: Source code.
At the end of the post, the complete project will have the following structure:
First, let’s create an ASP.NET Core solution project, where we will store the application layers. So, in the terminal run the following command:
dotnet new sln -n BookingFast
This is the innermost layer and contains the most critical part of the business logic, representing the core of the application; it must be completely independent of any external frameworks. In the core layer, you define your domain models, business rules and application-specific logic. This layer should have no dependencies on the other layers and is often called the “Domain” or “Entities” layer.
To create the domain layer in the project and add it to the solution, use the following commands:
dotnet new classlib -n BookingFast.Domain
dotnet sln BookingFast.sln add BookingFast.Domain/BookingFast.Domain.csproj
Now inside the “BookingFast.Domain” folder, create a new folder called “Entities” and inside it create the following class:
namespace BookingFast.Domain.Entities;
public class Reservation
{
public Reservation(Guid id, string guestName, DateTime checkInDate, DateTime checkOutDate, string status)
{
Id = id;
GuestName = guestName;
CheckInDate = checkInDate;
CheckOutDate = checkOutDate;
Status = status;
}
public Guid Id { get; set; }
public string? GuestName { get; set; }
public DateTime CheckInDate { get; set; }
public DateTime CheckOutDate { get; set; }
public string? Status { get; set; }
}
The “Reservation” class is the main entity of our application, so it belongs to the domain layer, which is the innermost layer in an onion structure.
Next, let’s create the Infrastructure layer.
The infrastructure layer is responsible for interacting with external systems, frameworks and services. In the context of ASP.NET Core, this layer includes code related to data access, communication with external services and other infrastructure concerns. This layer can have dependencies on external libraries, frameworks and ASP.NET Core itself.
To create the Infrastructure layer and add it to the solution, at the root of the project execute the following commands:
dotnet new classlib -n BookingFast.Infrastructure
dotnet sln BookingFast.sln add BookingFast.Infrastructure/BookingFast.Infrastructure.csproj
First, we need to download the dependencies to the infrastructure layer, so open a terminal in the infrastructure project and execute the following commands:
dotnet add package Microsoft.Extensions.Options.ConfigurationExtensions --version 8.0.0
dotnet add package MongoDB.Driver --version 2.22.0
Now, let’s create the class that will hold the database connection settings.
Then, inside the “BookingFast.Infrastructure” folder, create a new folder called “Repositories.” Inside it, create a new class called “ReservationsDatabaseSettings” and place the following code in it:
namespace BookingFast.Infrastructure.Repositories;
public class ReservationsDatabaseSettings
{
public string ConnectionString { get; set; }
public string DatabaseName { get; set; }
public string CollectionName { get; set; }
}
In this example, we will create a database in MongoDB Atlas, MongoDB’s cloud database service, which is simple to set up. To do this, you first need to create a cluster in MongoDB Atlas and then create the database. If you’re new to MongoDB Atlas, I recommend this guide for creating and configuring your first cluster: MongoDB Atlas Getting Started (Atlas UI tab).
With the cluster configured, we can create a “reservations_db” database and a “reservations” collection as in the image below:
To connect our application to the cluster and access the created database, we need to obtain the connection string, which we will use later. To get it, just follow the steps shown in the images below:
In your database, click “Connect” > “Drivers” and in the window copy the connection string, as shown in the image below.
Now that we have the connection to the cluster, let’s implement the configuration in the project. Replace the code in the “appsettings.json” file of the “BookingFast.UI” layer with the code below:
{
"ReservationsDatabaseSettings": {
"ConnectionString": "<your cluster connection>",
"DatabaseName": "reservations_db",
"CollectionName": "reservations",
"IsSSL": true
},
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
}
},
"AllowedHosts": "*"
}
In the code above, replace "&lt;your cluster connection&gt;" with your previously obtained cluster connection string. Also, remember to replace "&lt;username&gt;" and "&lt;password&gt;" with the cluster username and password.
Now let’s create the repository interface with the methods responsible for database operations.
In an onion architecture, a repository interface is usually found at the domain layer, as repositories are part of the data access logic and are a fundamental part of the application domain.
So, inside the “BookingFast.Domain” folder, create a new folder called “Infra.” Inside that, create another folder with the name “Interfaces” and add the following interface inside it:
using BookingFast.Domain.Entities;
namespace BookingFast.Domain.Infra.Interfaces;
public interface IReservationsRepository
{
Task<IEnumerable<Reservation>> FindAllReservations();
Task InsertReservation(Reservation reservation);
Task UpdateReservationStatus(string status, Guid id);
}
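Because the interface lives in the domain layer, outer layers can be swapped without touching business code. As an illustration (not part of the sample project), a hypothetical in-memory fake that satisfies the same contract could be used in unit tests in place of the real MongoDB repository:

```csharp
using BookingFast.Domain.Entities;
using BookingFast.Domain.Infra.Interfaces;

// Hypothetical in-memory implementation for unit testing — it fulfills the
// same IReservationsRepository contract as the MongoDB-backed repository.
public class InMemoryReservationsRepository : IReservationsRepository
{
    private readonly List<Reservation> _store = new();

    public Task<IEnumerable<Reservation>> FindAllReservations()
        => Task.FromResult<IEnumerable<Reservation>>(_store);

    public Task InsertReservation(Reservation reservation)
    {
        _store.Add(reservation);
        return Task.CompletedTask;
    }

    public Task UpdateReservationStatus(string status, Guid id)
    {
        // Find the matching reservation and mutate it in place.
        var reservation = _store.FirstOrDefault(r => r.Id == id);
        if (reservation is not null)
            reservation.Status = status;
        return Task.CompletedTask;
    }
}
```

This kind of substitution is one of the main practical payoffs of keeping abstractions in the innermost layer.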
To create the repository class, we will use the infrastructure layer. So, inside the “BookingFast.Infrastructure” folder, in the “Repositories” folder, create the class below:
ReservationsRepository
using BookingFast.Domain.Entities;
using BookingFast.Domain.Infra.Interfaces;
using Microsoft.Extensions.Options;
using MongoDB.Driver;
namespace BookingFast.Infrastructure.Repositories;
public class ReservationsRepository : IReservationsRepository
{
private readonly IMongoCollection<Reservation> _reservations;
public ReservationsRepository(IOptions<ReservationsDatabaseSettings> options)
{
var mongoClient = new MongoClient(options.Value.ConnectionString);
_reservations = mongoClient
.GetDatabase(options.Value.DatabaseName)
.GetCollection<Reservation>(options.Value.CollectionName);
}
public async Task<IEnumerable<Reservation>> FindAllReservations()
{
if (_reservations == null)
return Enumerable.Empty<Reservation>();
return await _reservations.Find(_ => true).ToListAsync();
}
public async Task InsertReservation(Reservation reservation)
{
await _reservations.InsertOneAsync(reservation);
}
public async Task UpdateReservationStatus(string status, Guid id)
{
var reservation = await _reservations.Find(a => a.Id == id).SingleOrDefaultAsync();
if (reservation is null)
return;
reservation.Status = status;
await _reservations.ReplaceOneAsync(a => a.Id == id, reservation);
}
}
Note that we use the “IReservationsRepository” interface, which is from the domain layer. To use it, we need to add the reference to the domain layer in the infrastructure layer. So, double-click on the file “BookingFast.Infrastructure.csproj” and add the code below to it:
<ItemGroup>
<ProjectReference Include="..\BookingFast.Domain\BookingFast.Domain.csproj" />
</ItemGroup>
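As a possible refinement (not required for the sample), the status change could be expressed as a single partial update instead of a read-modify-replace, using the MongoDB driver's update builders. This avoids loading the whole document and rewriting it:

```csharp
// Hypothetical alternative implementation of UpdateReservationStatus:
// let MongoDB apply only the changed field in one round trip.
public async Task UpdateReservationStatus(string status, Guid id)
{
    var filter = Builders<Reservation>.Filter.Eq(r => r.Id, id);
    var update = Builders<Reservation>.Update.Set(r => r.Status, status);

    // UpdateOneAsync modifies at most one matching document; if no document
    // matches the id, nothing happens.
    await _reservations.UpdateOneAsync(filter, update);
}
```

Either approach works with the interface defined in the domain layer; the builders version is simply less chatty with the database.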
The Infrastructure layer is ready. The next step is to implement the Application layer, where we will create the service class.
The next concentric circle is the application layer, which depends on the domain layer but should also not have dependencies on external frameworks. This layer contains application-specific services, use cases and application logic. It acts as an intermediary between the domain layer and external layers such as the UI and infrastructure layers.
To create the application layer and add it to the solution, execute the following commands in the terminal, in the project root:
dotnet new classlib -n BookingFast.Application
dotnet sln BookingFast.sln add BookingFast.Application/BookingFast.Application.csproj
First, let’s create the Data Transfer Object (DTO) classes, which are exposed to the UI layer and represent the entity model. In this case, it will be the “ReservationDto” class.
So, inside the “BookingFast.Application” folder, create a new folder called “Dtos” and inside it create a new class called “ReservationDto.” Place the code below in it:
using BookingFast.Domain.Entities;
namespace BookingFast.Application.Dtos;
public class ReservationDto
{
public ReservationDto() { }
public ReservationDto(Reservation reservation)
{
Id = reservation.Id;
GuestName = reservation.GuestName;
CheckInDate = reservation.CheckInDate;
CheckOutDate = reservation.CheckOutDate;
Status = reservation.Status;
}
public Guid Id { get; set; }
public string? GuestName { get; set; }
public DateTime CheckInDate { get; set; }
public DateTime CheckOutDate { get; set; }
public string? Status { get; set; }
}
Here we also need to add the dependencies on the other layers—the domain and infrastructure—so double-click on the “BookingFast.Application.csproj” file and add the code below:
<ItemGroup>
<ProjectReference Include="..\BookingFast.Domain\BookingFast.Domain.csproj" />
<ProjectReference Include="..\BookingFast.Infrastructure\BookingFast.Infrastructure.csproj" />
</ItemGroup>
The next step is to create the service class and the methods that perform database operations through the infrastructure layer. Inside the “BookingFast.Application” folder, create a new folder called “Services” and inside it create the following interface and class:
using BookingFast.Application.Dtos;
namespace BookingFast.Application.Services;
public interface IReservationsService
{
Task<List<ReservationDto>> FindAllReservations();
Task CreateNewReservation(ReservationDto reservation);
Task UpdateReservationStatus(string status, Guid id);
}
using BookingFast.Application.Dtos;
using BookingFast.Domain.Entities;
using BookingFast.Domain.Infra.Interfaces;
namespace BookingFast.Application.Services;
public class ReservationsService : IReservationsService
{
private readonly IReservationsRepository _reservationsRepository;
public ReservationsService(IReservationsRepository reservationsRepository)
{
_reservationsRepository = reservationsRepository;
}
public async Task<List<ReservationDto>> FindAllReservations()
{
var reservations = await _reservationsRepository.FindAllReservations();
return reservations.Select(reservation => new ReservationDto(reservation)).ToList();
}
public async Task CreateNewReservation(ReservationDto reservation)
{
var newReservation = new Reservation(reservation.Id, reservation.GuestName, reservation.CheckInDate, reservation.CheckOutDate, reservation.Status);
await _reservationsRepository.InsertReservation(newReservation);
}
public async Task UpdateReservationStatus(string status, Guid id)
{
await _reservationsRepository.UpdateReservationStatus(status, id);
}
}
The outermost circle is the UI layer, which includes the application’s user interface components. In the context of ASP.NET Core, this layer includes controllers, views and other components responsible for handling HTTP requests, user input and UI rendering. The UI layer depends on the application and infrastructure layers, but should not contain any business logic. It mainly handles user interactions and invokes application services.
To create the UI project, run the command below:
dotnet new web -n BookingFast.UI
This command will create a new project using the ASP.NET Core Minimal APIs template. Next, run the following command to add the “BookingFast.UI” project to the solution:
dotnet sln BookingFast.sln add BookingFast.UI/BookingFast.UI.csproj
Now let’s add the reference to the application layer. Double-click the “BookingFast.UI.csproj” file and add the following code snippet:
<ItemGroup>
<ProjectReference Include="..\BookingFast.Application\BookingFast.Application.csproj" />
</ItemGroup>
Then, let’s download the NuGet packages to the UI layer. Open a terminal in the UI project and execute the following commands:
dotnet add package Microsoft.AspNetCore.OpenApi --version 8.0.0
dotnet add package Swashbuckle.AspNetCore --version 6.5.0
The next step is to create the controller, which will call the service class methods and expose the data through the endpoints.
In the UI layer, create a new folder called “Controllers.” Inside it, create a new file called “ReservationsController.cs” and place the code below in it:
using BookingFast.Application.Dtos;
using BookingFast.Application.Services;
using Microsoft.AspNetCore.Mvc;
namespace BookingFast.UI.Controllers;
[ApiController]
[Route("[controller]")]
public class ReservationsController : Controller
{
private readonly IReservationsService _reservationsService;
public ReservationsController(IReservationsService reservationsService)
{
_reservationsService = reservationsService;
}
[HttpGet]
public async Task<ActionResult<List<ReservationDto>>> FindAllReservations()
{
var reservations = await _reservationsService.FindAllReservations();
return Ok(reservations);
}
[HttpPost]
public async Task<IActionResult> CreateNewReservation([FromBody] ReservationDto reservation)
{
await _reservationsService.CreateNewReservation(reservation);
return Ok();
}
[HttpPut("{id}")]
public async Task<IActionResult> UpdateReservationStatus(string status, Guid id)
{
await _reservationsService.UpdateReservationStatus(status, id);
return Ok();
}
}
The last step is to configure the dependency injection of the classes. In the “Program.cs” file, replace the existing code with the code below:
using BookingFast.Application.Services;
using BookingFast.Domain.Infra.Interfaces;
using BookingFast.Infrastructure.Repositories;
var builder = WebApplication.CreateBuilder(args);
builder.Services.Configure<ReservationsDatabaseSettings>(builder.Configuration.GetSection("ReservationsDatabaseSettings"));
builder.Services.AddSingleton<IReservationsRepository, ReservationsRepository>();
builder.Services.AddScoped<IReservationsService, ReservationsService>();
builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
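One detail worth noting: the MongoDB documentation recommends reusing a single MongoClient per application, since the client manages its own connection pool. The repository above creates its client in its constructor, which is fine here because it is registered as a singleton. If you later shorten the repository's lifetime, a hedged alternative (not used in the sample as written) is to register the client itself as a singleton and inject it:

```csharp
// Possible variation: one shared IMongoClient for the whole app.
// Requires: using MongoDB.Driver; using Microsoft.Extensions.Options;
builder.Services.AddSingleton<IMongoClient>(sp =>
{
    var settings = sp.GetRequiredService<IOptions<ReservationsDatabaseSettings>>();
    return new MongoClient(settings.Value.ConnectionString);
});
```

The repository constructor would then accept an `IMongoClient` instead of building its own `MongoClient`.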
To test the application, open a terminal in the UI project and execute the following command:
dotnet run
In the browser, access http://localhost:5202/swagger/index.html (the port may differ in your environment), and you can execute the operations in the Swagger interface, as shown in the GIF below:
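If you prefer exercising the API from code rather than the Swagger UI, a small console snippet like the following could call the endpoints. It assumes the app is running and listening on port 5202; adjust the port to match your launch settings:

```csharp
using System.Net.Http.Json;

// Hypothetical client-side check of the running API.
using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5202") };

// Create a reservation via POST /Reservations.
var reservation = new
{
    id = Guid.NewGuid(),
    guestName = "Ada Lovelace",
    checkInDate = DateTime.UtcNow.Date,
    checkOutDate = DateTime.UtcNow.Date.AddDays(2),
    status = "Pending"
};
var post = await client.PostAsJsonAsync("/Reservations", reservation);
Console.WriteLine($"POST status: {(int)post.StatusCode}");

// List all reservations via GET /Reservations.
var all = await client.GetStringAsync("/Reservations");
Console.WriteLine(all);
```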
In summary, the onion architectural pattern stands out as a notable approach to structuring and sustaining ASP.NET Core projects efficiently. Throughout this post, we explored the basics of this pattern and examined its practical application.
Whenever you create a new project, consider using the onion pattern. That way, you will not only take advantage of the structural benefits the pattern offers, but you will also be investing in more readable, testable and maintainable code.
Today I’ll teach you how to design a confirmation dialog that asks for user approval and sends an email upon confirmation—all inside your ASP.NET Core app. This seamless connection of user interaction and backend processing improves the user experience and streamlines your program’s operation.
To implement this functionality, we will use the Telerik UI for ASP.NET Core Dialog component.
Progress Telerik UI for ASP.NET Core is a professional-grade UI library suite of more than 110 performance-optimized components that allow you to deliver high-quality applications faster. Its components include the Grid, Scheduler, Chart, Editor and many others, all of which may be customized to visualize and manage your data. It also includes built-in, customizable themes for a professional appearance and feel.
These components are HTML and Tag helpers that wrap the HTML/JavaScript Kendo UI widgets and transport them to .NET Core. It addresses all app needs for data management, performance, UX, design and accessibility, among other things.
Telerik UI for ASP.NET Core has excellent support and award-winning documentation, code examples and training. You can start quickly and generate real results in hours rather than months.
If you want to give it a shot, it comes with a free 30-day trial. It’s a terrific approach to assess if it meets your requirements and can help you deliver projects more quickly and efficiently. Whether you’re already using it or learning about it for the first time, Telerik UI for ASP.NET Core is worth a look!
Let’s learn how to use Telerik UI to add a sophisticated email-sending dialog to make your ASP.NET Core applications more engaging and efficient!
We start by creating the interface in the Home controller and adding it to the main menu.
In the HomeController, we add the SendEmailConfirmation action:
public IActionResult SendEmailConfirmation()
{
    ViewData["Message"] = "Choose your plan";

    return View();
}
In the Shared folder, in the _Layout.cshtml file, we add the menu option for SendEmailConfirmation:
<kendo-responsivepanel name="responsive-panel" auto-close="false" breakpoint="768" orientation="top">
    @(Html.Kendo().Menu()
        .Name("Menu")
        .Items(items =>
        {
            items.Add().Text("Home").Action("Index", "Home", new { area = "" });
            items.Add().Text("Contact").Action("Contact", "Home", new { area = "" });
            items.Add().Text("Confirmation").Action("SendEmailConfirmation", "Home", new { area = "" });
        })
    )
</kendo-responsivepanel>
Create the view file SendEmailConfirmation.cshtml in the Views folder:
@{
    ViewData["Title"] = "Available Plans";
}

<section class="jumbotron text-center">
    <div class="container">
        <h1 class="jumbotron-heading">@ViewBag.Title</h1>
        <p class="lead-text text-muted">@ViewBag.Message</p>
    </div>
</section>

<div class="row mb-3">
    <div class="col-md mt-3 mb-3">
        <div class="k-card">
            <div class="k-card-header">
                Basic
            </div>
            <div class="k-card-body">
                <p>Ideal for small businesses or startups, this plan offers a cost-effective solution to kickstart your marketing efforts. You’ll receive 4 custom-designed monthly content cards, perfect for enhancing your social media presence and engaging with your audience. This plan includes basic customization options to align with your brand identity.</p>
            </div>
            <div class="k-card-actions">

            </div>
        </div>
    </div>

    ...
We need to add the options for the end user to select a plan. For convenience, I’ll show only the basic plan code.
In the <div class="k-card-actions"> element, we add the Cancel and Confirm buttons:
@(Html.Kendo().Button()
    .Name("cancelBasic")
    .Content("Cancel")
    .Events(ev => { ev.Click("closePage"); }))

@(Html.Kendo().Button()
    .Name("confirmBasic")
    .Content("Confirm")
    .ThemeColor(ThemeColor.Primary)
    .HtmlAttributes(new { type = "button", param = "BASIC" })
    .Events(ev => { ev.Click("sendConfirmation"); }))
The confirmBasic button adds the plan type string in the param attribute using the HtmlAttributes() method.
The primary color depends on the theme in use. Apply it to the button with the ThemeColor() method, using the ThemeColor.Primary value.
We repeat the process for the pro and premium plans.
Let’s create the Dialog:
@(Html.Kendo().Dialog()
    .Name("dialog")
    .Title("Plan confirmation")
    .Width(400)
    .Modal(true)
    .Visible(false)
    .Actions(actions =>
    {
        actions.Add().Text("Cancel").Action("closePage");
        actions.Add().Text("Send").Action("sendEmail").Primary(true);
    }))
In this code, we added the Visible(false) option, which prevents the Dialog from showing up on page load.
We also configured our action buttons. We have the Cancel and the Send buttons in this sample. The actions specified for the buttons are defined in the JavaScript functions below:
<script>
    let typePlan = "";

    function sendEmail() {
        var TypePlan = typePlan;

        $.ajax({
            url: '/SendEmail/SendEmail',
            type: 'POST',
            data: { TypePlan: TypePlan },
            success: function (response) {
                $('#dialogSuccess').data("kendoDialog").open();
            },
            error: function (error) {
                $('#dialogError').data("kendoDialog").open();
            }
        });
    }

    function sendConfirmation() {
        typePlan = this.element.attr("param");
        $('#dialog').html("<p>You are about to confirm your " + typePlan + " plan.</p>");
        $('#dialog').data("kendoDialog").open();
    }

    function closePage() {
        window.location.href = '@Url.Action("Index", "Home")';
    }
</script>
In the sendConfirmation() function, we read the plan type from the param attribute, set the Dialog’s HTML content to a message with the plan’s name and call the open() method to show the dialog.
Adding extra dialogs for success and error while sending the email confirmation:
@(Html.Kendo().Dialog()
    .Name("dialogSuccess")
    .Title("Success")
    .Content("<p>Your subscription confirmation was sent to your email account.</p>")
    .Width(400)
    .Modal(true)
    .Visible(false)
    .Actions(actions =>
    {
        actions.Add().Text("Close").Action("closePage").Primary(true);
    }))

@(Html.Kendo().Dialog()
    .Name("dialogError")
    .Title("Error")
    .Content("<p>An error occurred while sending your subscription to your email account. Please try again later.</p>")
    .Width(400)
    .Modal(true)
    .Visible(false)
    .Actions(actions =>
    {
        actions.Add().Text("Close").Action("closePage").Primary(true);
    }))
In this case, we create a dedicated controller, SendEmailController.cs, which is called from the JavaScript AJAX request:
$.ajax({
    url: '/SendEmail/SendEmail',
Source code for SendEmailController.cs:
using Microsoft.AspNetCore.Mvc;
using System.Net.Mail;
using TelerikAspNetCoreApp1.CodeCS;

namespace TelerikAspNetCoreApp1.Controllers;

public class SendEmailController : Controller
{
    public IActionResult SendEmail(string typePlan)
    {
        try
        {
            var email = new EmailSender(new SmtpClient());

            email.SendEmail("customer@email.com", "Confirmation " + typePlan, "This is the confirmation for your plan " + typePlan);

            return StatusCode(StatusCodes.Status200OK);
        }
        catch
        {
#if (DEBUG)
            return StatusCode(StatusCodes.Status200OK);
#else
            return StatusCode(StatusCodes.Status500InternalServerError);
#endif
        }
    }
}
We can create a folder called CodeCS and add the EmailSender class:
namespace TelerikAspNetCoreApp1.CodeCS;

using System.Net.Mail;

public class EmailSender
{
    private SmtpClient _smtpClient;

    public EmailSender(SmtpClient smtpClient)
    {
        _smtpClient = smtpClient;
    }

    public void SendEmail(string emailTo, string subject, string message)
    {
        var mailMessage = new MailMessage("noreply@progress.com", emailTo, subject, message);
        _smtpClient.Send(mailMessage);
    }
}
To make this code work, add your SmtpClient() parameters, adjust the MailMessage and get the emailTo address from your currently logged-in user.
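For reference, a typical SmtpClient setup might look like the sketch below. The host, port and credentials are placeholders you must replace with your email provider's values:

```csharp
using System.Net;
using System.Net.Mail;
using TelerikAspNetCoreApp1.CodeCS;

// Hypothetical SMTP configuration — every value below is a placeholder.
var smtpClient = new SmtpClient("smtp.example.com")
{
    Port = 587,                                                   // common submission port
    Credentials = new NetworkCredential("smtp-user", "password"), // your provider's credentials
    EnableSsl = true                                              // most providers require TLS
};

// Wire it into the EmailSender class shown above.
var email = new EmailSender(smtpClient);
email.SendEmail("customer@email.com", "Confirmation BASIC", "This is the confirmation for your plan BASIC");
```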
This is the main page for choosing the plan:
Pressing Confirm on the Premium box raises the dialog confirmation:
Pressing Send raises the success message:
Check out the source code on my GitHub, and you can fork it anytime.
With Telerik UI for ASP.NET Core, it’s easy to wire dialog events to controller actions. Users can get more done in less time because they can access many different tasks right away.
Sign up for a free trial on the Telerik website to start making valuable data solutions immediately. Even during your free trial, you’ll get help from the Telerik team—the best in the business.
GitHub Copilot revolutionizes coding as your AI pair programmer, helping you craft code faster and more effectively. It intuitively understands the context from your comments and code, swiftly suggesting individual lines and complete routines.
This innovative tool springs from a collaborative effort between GitHub, OpenAI and Microsoft, powered by a cutting-edge generative AI model. Copilot analyzes not just your current file but also linked files as you code, offering smart autocomplete suggestions in your text editor.
Image from a custom prompt in Leonardo.ai
This tool isn’t just about speed—it’s transforming the programming landscape by making coding more efficient and accessible. GitHub’s research highlights Copilot’s significant role in boosting developer productivity and satisfaction. It’s breaking down barriers to software development for newcomers and smoothing out the challenges of writing basic code.
The impact of generative AI on the economy will be monumental. Developers and businesses are already embracing AI-powered coding tools like GitHub Copilot, marking a new era in software development.
GitHub Copilot is a vital tool that can assist us, as developers, with writing code more quickly and efficiently. It can revolutionize the programming world by making coding more approachable and lowering the entrance barrier for new developers.
In this article, I’ll demonstrate how GitHub Copilot can help produce code and explain how I got the desired result. (This post was written in December 2023.)
I’ve been actively using GitHub Copilot for four months, experiencing its capabilities firsthand. Let’s say you’re already familiar with the paid version of ChatGPT-4 or Bing Chat, which incorporates ChatGPT-4 connected to the web. You’ll notice that GitHub Copilot’s code generation capabilities are similar to these tools. Interestingly, when one tool struggles to generate code, others often face the same challenge.
GitHub Copilot seamlessly integrates with both Visual Studio 2022 and Visual Studio Code. A notable development was the introduction of a new feature in the Visual Studio 2022 Preview, launched in December 2022. This enhancement significantly enriches the Visual Studio IDE experience by adding commit descriptions.
I have 30 years of experience designing thousands of components, including an AI framework model based on database structure, with self-awareness of the relations between fields and tables, built with mixed artificial-intelligence techniques. One of its results was a chat with the database, where the end user can ask anything about the stored data and get a reply that is 90% accurate.
This is my experience with AI, so when ChatGPT was launched, I saw the potential for artificial assistance, like GitHub Copilot.
I shared this background to bring you to my point of view and say: if you have the necessary background, know the names of the components and technologies, and understand how to envision a machine “thinking” with the commands it receives, you will become an excellent prompt engineer.
It depends on you: how long will you practice prompts to find the best results for your projects and problems? I spent four hours testing prompts while writing this post, asking GitHub Copilot to deliver code that satisfies my requirements.
The requirement was to generate a model with EF Core commands, create a form to read its fields using Kendo UI, and write the controller code to send an email. For validations, the date of birth should be between 01/01/1900 and DateTime.Now, the return date should be greater than DateTime.Now+1, and the email should be validated with a regex.
From my experience, I observed that newer designs, languages and patterns are not generated very well. As of ChatGPT-4’s latest knowledge update (currently April 2023), older technologies yield better results than newer ones.
Let’s start with what does not work exceptionally well with GitHub Copilot:
Here are some of the things GitHub Copilot does well with Telerik UI frameworks:
I started creating a prompt for the model. Let’s see this prompt below:
Please generate a C# code with these requirements: Create a model Admission in namespace TelerikAspNetCoreApp1.Model, add display properties for the fields: Id as key and GUID type autogenerate, Name as unique max 100 chars, Date Birthday, Email, Subject max 255, Description max 4096, Return Date.
Please add EF Core validations for Date Birthday greater than 01/01/1900 to DateTime.Now, and return date greater than DateTime.Now+1, validate email with regex.
Structure of the prompt:
Below is the result for the model (the project is available on my GitHub):
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class Admission
{
    [Key]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public Guid Id { get; set; }

    [Required]
    [StringLength(100)]
    [Index(IsUnique = true)]
    public string Name { get; set; }

    [Required]
    [DataType(DataType.Date)]
    [Display(Name = "Date Birthday")]
    [CustomValidation(typeof(Admission), "ValidateDateOfBirth")]
    public DateTime DateBirthday { get; set; }

    [Required]
    [DataType(DataType.EmailAddress)]
    [RegularExpression(@"^[\w-]+(\.[\w-]+)*@([\w-]+\.)+[a-zA-Z]{2,7}$", ErrorMessage = "Invalid Email Format")]
    public string Email { get; set; }

    [StringLength(255)]
    public string Subject { get; set; }

    [StringLength(4096)]
    public string Description { get; set; }

    [Required]
    [DataType(DataType.Date)]
    [Display(Name = "Return Date")]
    [CustomValidation(typeof(Admission), "ValidateReturnDate")]
    public DateTime ReturnDate { get; set; }

    public static ValidationResult ValidateDateOfBirth(DateTime dateOfBirth, ValidationContext context)
    {
        if (dateOfBirth < new DateTime(1900, 1, 1) || dateOfBirth > DateTime.Now)
        {
            return new ValidationResult("Date Birthday must be greater than 01/01/1900 and less than current date.");
        }

        return ValidationResult.Success;
    }

    public static ValidationResult ValidateReturnDate(DateTime returnDate, ValidationContext context)
    {
        if (returnDate <= DateTime.Now.AddDays(1))
        {
            return new ValidationResult("Return Date must be greater than current date plus one day.");
        }

        return ValidationResult.Success;
    }
}
There was only one mistake: The code was generated without the namespace. But I solved this by just adding the namespace:
namespace TelerikAspNetCoreApp1.Model;
Some things I learned:
I started creating a prompt for the form. Let’s see this prompt below:
Please using Kendo UI for jQuery with Html Helper style for ASP.NET Core MVC project create a Kendo UI form to post in Admission Home Controller. Using the model Admission, please build all fields using Kendo UI, and labels for each field, and with respective validation. For posting the form generate Admission that must get all data from the model and send to an email from consts pEmailFrom to pEmailTo defining a fake value for the consts.
Structure of the prompt:
Below is the result for the form (the project is available on my GitHub):
@model TelerikAspNetCoreApp1.Model.Admission
@{
    ViewData["Title"] = "Admission";
}

@using (Html.BeginForm("Admission", "Home", FormMethod.Post))
{
    @Html.AntiForgeryToken()

    <div class="form-horizontal">
        <h4>Admission</h4>
        <hr />
        @Html.ValidationSummary(true, "", new { @class = "text-danger" })

        <div class="form-group">
            @Html.LabelFor(model => model.Name, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @(Html.Kendo().TextBoxFor(model => model.Name)
                    .HtmlAttributes(new { @class = "k-textbox" }))
                @Html.ValidationMessageFor(model => model.Name, "", new { @class = "text-danger" })
            </div>
        </div>

        <div class="form-group">
            @Html.LabelFor(model => model.DateBirthday, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @(Html.Kendo().DatePickerFor(model => model.DateBirthday)
                    .HtmlAttributes(new { @class = "k-datepicker" }))
                @Html.ValidationMessageFor(model => model.DateBirthday, "", new { @class = "text-danger" })
            </div>
        </div>

        <div class="form-group">
            @Html.LabelFor(model => model.Email, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @(Html.Kendo().TextBoxFor(model => model.Email)
                    .HtmlAttributes(new { @class = "k-textbox" }))
                @Html.ValidationMessageFor(model => model.Email, "", new { @class = "text-danger" })
            </div>
        </div>

        <div class="form-group">
            @Html.LabelFor(model => model.Subject, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @(Html.Kendo().TextBoxFor(model => model.Subject)
                    .HtmlAttributes(new { @class = "k-textbox" }))
                @Html.ValidationMessageFor(model => model.Subject, "", new { @class = "text-danger" })
            </div>
        </div>

        <div class="form-group">
            @Html.LabelFor(model => model.Description, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @(Html.Kendo().TextAreaFor(model => model.Description)
                    .HtmlAttributes(new { @class = "k-textarea" }))
                @Html.ValidationMessageFor(model => model.Description, "", new { @class = "text-danger" })
            </div>
        </div>

        <div class="form-group">
            @Html.LabelFor(model => model.ReturnDate, htmlAttributes: new { @class = "control-label col-md-2" })
            <div class="col-md-10">
                @(Html.Kendo().DatePickerFor(model => model.ReturnDate)
                    .HtmlAttributes(new { @class = "k-datepicker" }))
                @Html.ValidationMessageFor(model => model.ReturnDate, "", new { @class = "text-danger" })
            </div>
        </div>

        <div class="form-group">
            <div class="col-md-offset-2 col-md-10">
                <input type="submit" value="Create" class="btn btn-default" />
            </div>
        </div>
    </div>
}
Code for Home Controller:
private const string pEmailFrom = "fake@example.com";
private const string pEmailTo = "destination@example.com";

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Admission(Admission admission)
{
    if (ModelState.IsValid)
    {
        // Send email
        var mail = new MailMessage(pEmailFrom, pEmailTo)
        {
            Subject = admission.Subject,
            Body = $"Name: {admission.Name}\nDate Birthday: {admission.DateBirthday}\nEmail: {admission.Email}\nDescription: {admission.Description}\nReturn Date: {admission.ReturnDate}"
        };

        var smtpServer = new SmtpClient("smtp.example.com");
        smtpServer.Send(mail);

        return RedirectToAction("Index");
    }

    return View();
}
Its first response was lazy: it generated only part of the form and indicated that the other fields needed to be added. People say ChatGPT-4 is becoming lazy, and apparently GitHub Copilot is too.
So, I did another prompt:
For all fields, please.
Below is the form running on the browser:
Here is the demonstration of the validations for the fields:
GitHub Copilot is still evolving, in my opinion, and has room to grow in quality. If you need to start something new or rewrite legacy projects, you have a good assistant. GitHub Copilot may be missing some knowledge of Kendo UI components, but it generates a good percentage of the code.
For success with Telerik UI and GitHub Copilot, you need to spend some time testing and learning how to prompt, and the rule is no pain, no gain.
Maybe in the future someone will create a plugin for ChatGPT-4—could it be you? Or perhaps Microsoft will evolve GitHub Copilot for better integration.
In Part 1 of Data Structures, we saw some examples of basic structures in the ASP.NET Core context, what each one means and how it can be implemented. In this second part, we’ll cover the main advanced topics in data structures.
Let’s see the meaning of each of them and understand how they work through practical examples.
Advanced data structures are complex, specialized data organizations that provide efficient methods for storing and manipulating data across a variety of computational tasks. These structures are designed to optimize specific operations, such as retrieval, insertion and deletion, and have applications in a wide variety of scenarios.
Below are some examples of advanced data structures:
Tree data structures
Examples: Binary trees, AVL trees, red-black trees, B trees and others
Heap data structures
Examples: Binary heaps, Fibonacci heaps and binomial heaps
Hashing
Examples: Hash tables, hash functions
These advanced data structures are essential in diverse computer science and software engineering applications where data management, efficient search and algorithm optimization are key concerns.
They provide the foundation for solving complex problems and improving the performance of software systems in areas such as databases, operating systems, networks and many others. Understanding when and how to apply these structures is crucial to designing efficient algorithms and data management systems.
Trees are a type of data structure used to represent hierarchical relationships between elements. Trees are widely used in programming for various purposes, and C# offers the flexibility to work with different types of trees.
Next, let’s check out the main types of trees in C# and implement an example of each.
Binary trees are data structures where each node has at most two child nodes, typically referred to as the left and right child. Binary trees can be used in various applications, including binary search and expression trees.
Below is a representation of a binary tree structure:
To practice the post examples, let’s create a new application in ASP.NET Core. So, execute in the terminal the following command:
dotnet new web -o PracticingDataStructurePartTwo
This command will create a folder called “PracticingDataStructurePartTwo” and inside it will be a basic web project using the Minimal API template. You can open the project with the IDE of your choice, in this example Visual Studio Code will be used.
You can access all code examples here: Sample source code.
Next, let’s create a class that represents a binary tree node. Each node must have data and references to its left and right children. So, in the root of the project, create a new folder called “Models” and inside it create the class below:
namespace PracticingDataStructurePartTwo.Models;
public class TreeNode
{
public int Data { get; set; }
public TreeNode Left { get; set; }
public TreeNode Right { get; set; }
public TreeNode(int data)
{
Data = data;
Left = null;
Right = null;
}
}
Note that in the class above we defined a class to represent each node in the tree, which has data and its left and right pointers, as represented in the previous image.
We can perform several operations on binary trees, such as inserting, searching for and deleting nodes, as well as traversing the tree.

Traversing a binary tree means visiting each tree node in a specific order, and traversal algorithms can be classified into two main categories: depth-first and breadth-first. There are different ways to traverse a binary tree, and the choice of traversal method depends on the specific task you want to perform. The three most common depth-first traversal methods are in-order, pre-order and post-order.
To implement an in-order traversal, let’s create a binary tree class. Inside the Models folder, create the class below:
namespace PracticingDataStructurePartTwo.Models;
public class BinaryTree
{
public TreeNode Root { get; set; } // Reference to the root node
public BinaryTree()
{
Root = null;
}
// Insert a node with the specified data
public void Insert(int data)
{
Root = InsertRecursive(Root, data);
}
// Recursive method to insert a node
private TreeNode InsertRecursive(TreeNode root, int data)
{
// If the current node is null, create a new node with the data
if (root == null)
{
root = new TreeNode(data);
return root;
}
// If the data is less than the current node's data, insert on the left
if (data < root.Data)
{
root.Left = InsertRecursive(root.Left, data); // Left child
}
// If the data is greater, insert on the right
else if (data > root.Data)
{
root.Right = InsertRecursive(root.Right, data); // Right child
}
return root;
}
// In-order traversal of the binary tree
public void InorderTraversal(TreeNode node)
{
if (node != null)
{
InorderTraversal(node.Left); // Traverse left subtree
Console.Write(node.Data + " "); // Print current node's data
InorderTraversal(node.Right); // Traverse right subtree
}
}
}
In the code above we defined the BinaryTree class, which is responsible for representing a binary tree data structure. It manages the root node of the tree and provides methods for inserting nodes into the tree and performing in-order traversal. Below is a detailed explanation of each element of the code:
Root property: Root is a property of the BinaryTree class. It contains a reference to the root node of the binary tree.

Constructor: The constructor of the BinaryTree class initializes the Root property to null, indicating that the tree is initially empty.

Insert method: The Insert method is used to insert a new node with a specific integer value into the binary tree. If the current node is null, it creates a new node with the given data and returns it. If the current node is not null, the method calls itself recursively on the left or right child, depending on whether the data is smaller or larger than the current node’s data, ensuring that the new node is inserted in the correct position.

Overall, the BinaryTree class encapsulates the core functionality of a binary tree, including creating the tree, inserting nodes and traversing the tree in order.
Now, in the Program.cs file, add the code below before the line app.Run():
BinaryTree tree = new BinaryTree();
// Insert nodes into the binary tree
tree.Insert(5);
tree.Insert(3);
tree.Insert(2);
tree.Insert(4);
tree.Insert(1);
tree.Insert(6);
Console.WriteLine("Inorder Traversal:");
tree.InorderTraversal(tree.Root);
Then, execute the application with the command dotnet run, and you will have the following output in the console:
Note that even though the list of numbers was inserted unordered, it was printed in increasing order by the InorderTraversal() method.
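In-order is only one of the three depth-first traversals. As a minimal sketch (assuming the same TreeNode shape as above, with the Traversals class name being hypothetical), the pre-order and post-order variants differ only in where the node's data is visited:

```csharp
using System;

public class TreeNode
{
    public int Data { get; set; }
    public TreeNode Left { get; set; }
    public TreeNode Right { get; set; }
    public TreeNode(int data) { Data = data; }
}

public static class Traversals
{
    // Pre-order: visit the node first, then its subtrees (root, left, right)
    public static void Preorder(TreeNode node)
    {
        if (node == null) return;
        Console.Write(node.Data + " ");
        Preorder(node.Left);
        Preorder(node.Right);
    }

    // Post-order: visit the subtrees first, then the node (left, right, root)
    public static void Postorder(TreeNode node)
    {
        if (node == null) return;
        Postorder(node.Left);
        Postorder(node.Right);
        Console.Write(node.Data + " ");
    }
}
```

For the tree built earlier (inserting 5, 3, 2, 4, 1, 6), pre-order prints 5 3 2 1 4 6 and post-order prints 1 2 4 3 6 5.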
Binary search is a search algorithm that can be applied to a sorted array or a binary search tree (BST). To demonstrate binary search in the binary tree created above, simply add the method below to the “BinaryTree” class:
public bool BinarySearch(TreeNode node, int target)
{
// Base case: If the node is null, the target is not found.
if (node == null)
{
return false;
}
// Compare the target value with the current node's data.
if (target == node.Data)
{
return true; // Found the target value.
}
else if (target < node.Data)
{
// If the target is smaller, search in the left subtree.
return BinarySearch(node.Left, target);
}
else
{
// If the target is larger, search in the right subtree.
return BinarySearch(node.Right, target);
}
}
And in the Program.cs file, replace the code snippet:

tree.Insert(5);
tree.Insert(3);
tree.Insert(2);
tree.Insert(4);
tree.Insert(1);
tree.Insert(6);

with the following:
tree.Insert(50);
tree.Insert(30);
tree.Insert(70);
tree.Insert(20);
tree.Insert(40);
tree.Insert(60);
tree.Insert(80);
and add the following code:
int target = 40;
bool found = tree.BinarySearch(tree.Root, target);
if (found)
Console.WriteLine($"Value {target} found in the tree.");
else
Console.WriteLine($"Value {target} not found in the tree.");
Then, execute in the terminal the command dotnet run and you’ll have the following result:
In the code above, we defined the BinarySearch() method, which searches for a target value in the binary search tree recursively.

The base case checks if the current node is null, which means the target value was not found, and returns false.

If the current node’s data matches the target value, it returns true, indicating that the target value has been found. If the target is smaller than the current node’s data, it searches the left subtree; if the target is larger, it searches the right subtree.

When calling the method, we passed the value 40 as the target, which is part of our tree containing the values 50, 30, 70, 20, 40, 60 and 80, so as expected the console output was: “Value 40 found in the tree.” As shown in the image below:
A heap in data structures refers to a specialized data structure that is used to store and manage elements in a way that allows the highest priority element to be easily accessed and removed.
There are two main types of heaps: the “binary heap” and the “Fibonacci heap.” In this post, we will talk about the binary heap, which is the most common.
In C#, a binary heap is often implemented using the PriorityQueue class from the System.Collections.Generic namespace.
Next, let’s see how it works and how to implement a binary heap:
A binary heap is a special binary tree that meets two main properties: the shape property, meaning the tree is complete (all levels are fully filled except possibly the last, which is filled from left to right), and the heap property, meaning each parent node respects a fixed ordering relative to its children.
In binary heaps, we have two main types: the max-heap, where each parent node is greater than or equal to its children, and the min-heap, where each parent node is less than or equal to its children.
The choice between using a max-heap or a min-heap depends on the needs of your algorithm or application. Here are some typical scenarios for each of the two types:
Max-heap:
Min-heap:
Implementing a max-heap or min-heap in C# can be accomplished using a class like PriorityQueue from the System.Collections.Generic namespace, as mentioned previously. However, note that by default PriorityQueue creates a min-heap. If you want a max-heap, you can provide a custom comparer that reverses the order of priorities.
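As a minimal sketch of that custom comparer (the MaxHeapDemo class name and the sample values are my own): passing a Comparer that reverses the default integer ordering makes PriorityQueue behave as a max-heap.

```csharp
using System;
using System.Collections.Generic;

class MaxHeapDemo
{
    public static void Main()
    {
        // Reversing the comparison makes the highest priority dequeue first.
        var maxHeap = new PriorityQueue<string, int>(
            Comparer<int>.Create((a, b) => b.CompareTo(a)));

        maxHeap.Enqueue("Low", 1);
        maxHeap.Enqueue("High", 10);
        maxHeap.Enqueue("Medium", 5);

        // Elements now come out from the highest priority to the lowest.
        while (maxHeap.TryDequeue(out var item, out var priority))
            Console.WriteLine($"{item} - {priority}");
        // Prints: High - 10, Medium - 5, Low - 1
    }
}
```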
Next, let’s implement a min-heap using the native C# PriorityQueue class. So, in the Program.cs file, add the code below:
var queue = new PriorityQueue<string, int>();
// Add elements with their associated priorities
queue.Enqueue("Red", 0);
queue.Enqueue("Blue", 4);
queue.Enqueue("Green", 2);
queue.Enqueue("Gray", 1);
// Dequeue and print elements based on their priorities
while (queue.TryDequeue(out var color, out var priority))
Console.WriteLine($"Color: {color} - Priority: {priority}");
The code above is an example of how to use a priority queue in C# to store elements with associated priorities. First, we create an instance of a priority queue (PriorityQueue) that is capable of storing string values (colors) with integer-valued priorities.

Then we add some elements to the priority queue. Each element represents a color, such as Red, Blue, Green and Gray, and each element has an associated priority, which is an integer, such as 0, 4, 2 and 1.

We then enter a while loop that continues until the priority queue is empty. Inside the loop, we dequeue elements from the priority queue using queue.TryDequeue(), which returns both the value and its priority.
Something important is that, in the “PriorityQueue” class, elements with the lowest priority are removed from the queue first.
Finally, we print the removed element to the screen. If you run the application you will have the following result:
Below you can see a representation of a binary min-heap.
Hashing is a fundamental process in computer science, which involves transforming input data into a fixed-size value, often called a “hash” or “hash code,” using a mathematical function known as a hash function.
The main feature of hash functions is that they produce a fixed-size output regardless of the size of the input. This hash is used to represent the original input in a compact way, facilitating efficient searching and storing of data in data structures such as hash tables.
The biggest advantage of using hashing data structures is that they allow you to store data and search it in constant time, that is, in O(1) time.
Hash functions must meet some important properties: they must be deterministic (the same input always produces the same hash), they must be fast to compute, and they should distribute outputs uniformly to minimize collisions.
The hash structure is basically made up of three components: a hash function, an array that stores the entries (the table itself, often divided into buckets) and a strategy for handling collisions.
To implement a hash table in C#, we can use the Hashtable class, which is part of the System.Collections namespace.
So, in the Program.cs file add the code below:
using System.Collections;
using System.Security.Cryptography;
using System.Text;

Hashtable hashtable = new Hashtable();
// Implementation a hash function using SHA256
int HashFunction(string key)
{
using (SHA256 sha256 = SHA256.Create())
{
byte[] inputBytes = Encoding.UTF8.GetBytes(key);
byte[] hashBytes = sha256.ComputeHash(inputBytes);
return BitConverter.ToInt32(hashBytes, 0); // Convert the first 4 bytes of the hash to an integer
}
}
//Add key-value pairs to the table
hashtable[HashFunction("2023001")] = "Bob";
hashtable[HashFunction("2023002")] = "Alice";
hashtable[HashFunction("2023003")] = "John";
// Retrieve values using keys
int key1 = HashFunction("2023001");
int key2 = HashFunction("2023002");
int key3 = HashFunction("2023003");
string value1 = (string)hashtable[key1];
string value2 = (string)hashtable[key2];
string value3 = (string)hashtable[key3];
Console.WriteLine("Index associated with key 2023001: " + key1);
Console.WriteLine("Index associated with key 2023002: " + key2);
Console.WriteLine("Index associated with key 2023003: " + key3);
Console.WriteLine("Value associated with key 2023001: " + value1);
Console.WriteLine("Value associated with key 2023002: " + value2);
Console.WriteLine("Value associated with key 2023003: " + value3);
Then, if you run the application, the following result will be displayed in the console:
In the code above, we create an instance of a hash table (Hashtable) and then create a method HashFunction(string key) that accepts a key (string) as input and returns an integer value. This method uses the SHA-256 hash function to calculate the hash value of the key.

Inside the HashFunction method, an instance of SHA256, a cryptographic hash algorithm, is created. The key (string) is then converted to a byte array using UTF-8 encoding. The ComputeHash function of the SHA256 object is used to calculate the hash of the key bytes. The first 4 bytes of the hash are converted to an integer value using BitConverter.ToInt32, which is then returned.

We then add key-value pairs to the hash table, where the keys are the outputs of HashFunction applied to the strings “2023001”, “2023002” and “2023003”, and the associated values are “Bob”, “Alice” and “John”.

The code then retrieves the values from the hash table using the keys calculated with HashFunction. The values associated with the keys are stored in the variables value1, value2 and value3.
Finally, we print to the console the indices associated with the keys and the values associated with these keys in the hash table.
In this way, the code demonstrates how hash tables are used to map keys to values using a hash function, illustrating the concept of hashing, which is one of the main subjects when we talk about complex data structures.
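In idiomatic modern C#, the generic Dictionary<TKey, TValue> is usually preferred over the non-generic Hashtable, since it hashes keys internally, avoids casting and boxing, and still offers average O(1) lookups. A minimal sketch (the DictionaryDemo class name and sample data mirror the example above):

```csharp
using System;
using System.Collections.Generic;

class DictionaryDemo
{
    public static void Main()
    {
        // Dictionary hashes its keys internally - no manual hash function needed.
        var students = new Dictionary<string, string>
        {
            ["2023001"] = "Bob",
            ["2023002"] = "Alice",
            ["2023003"] = "John"
        };

        // TryGetValue avoids an exception when the key is missing.
        if (students.TryGetValue("2023002", out var name))
            Console.WriteLine($"Value associated with key 2023002: {name}");
        // Prints: Value associated with key 2023002: Alice
    }
}
```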
In this post, we learned about three types of complex data structures: binary trees, heap and hashing. These concepts are very common in web applications, where there is a need to deal with a large amount of data, and in scenarios like these, it is common to have problems with optimizations that can be easily solved by data structures.
In addition to these, there are several other types of data structures that you may know, but these three are already a starting point for you to familiarize yourself with the subject.
Something important to remember is that C# has many built-in features for working with data structures, such as the Hashtable class, so consider using these native structures whenever possible.
ASP.NET Core development relies on manipulating, storing and retrieving data. A solid understanding of data structures allows developers to design and implement web applications that can handle large volumes of data and quickly respond to requests, scaling the applications efficiently.
Whether managing user sessions, caching data or optimizing database queries, data structures play a crucial role in improving the performance and reliability of ASP.NET Core applications, making them more competitive and capable of meeting the demands of modern web development.
In this post, we will cover the simplest topics in data structure in the context of ASP.NET Core, implementing an example of each and understanding each approach’s meaning.
Data represents a unit or element of information and can take different forms such as text, numbers, dates, images, videos and others.
ASP.NET Core uses the C# (C-Sharp) programming language, which has a wide variety of data types. The main types are:
- Primitive types: such as int, long, float, double, decimal, bool and char.
- Composite types: such as arrays, strings, classes, structs and generic collections. For example, the Dictionary<TKey, TValue> class represents a collection of key-value pairs, where each key is unique. It allows efficient retrieval of values based on their keys.

Data structures are a fundamental concept in computer science and programming. They are a way of organizing and storing data in a structured and efficient way so that it can be accessed and manipulated easily.
Data structures define how data is organized, stored and operated in a computer’s memory.
Next, we will understand the concept behind each of these data structures and how they can be implemented in C# within the context of ASP.NET Core. You can access the source code of the examples here: Practicing Data Structure source code.
An array is a data structure used to store a collection of elements of the same data type under a single variable name. These elements are stored in contiguous memory locations, making their access and manipulation efficient. Each element in an array is identified by an index or position, starting from 0 for the first element, 1 for the second, and so on.
Arrays are commonly used to store lists of data, such as numbers, strings or objects. They offer several advantages, including:
However, arrays also have some limitations, mainly with regard to size. In some scenarios, it may be necessary to use more complex data structures, such as dynamic lists.
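When a fixed-size array is too rigid, List&lt;T&gt; (a dynamic list backed by an internal array that grows as needed) is the usual choice in C#. A small sketch (the ListDemo class name and values are my own):

```csharp
using System;
using System.Collections.Generic;

class ListDemo
{
    public static void Main()
    {
        // A List<T> resizes automatically as elements are added.
        var numbers = new List<int> { 3, 5, 10 };
        numbers.Add(7);       // grows the list
        numbers.Add(12);
        numbers.RemoveAt(0);  // removes 3; remaining elements shift left

        Console.WriteLine(string.Join(", ", numbers));
        // Prints: 5, 10, 7, 12
    }
}
```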
Different programming languages have their own syntax for defining and working with arrays.
In C#, arrays can be declared as follows:
int[] intArray = { 3, 5, 10, 7, 12 }; // Integer type elements
string[] stringArray = { "Apple", "Banana", "Cherry", "Strawberry", "Fig" }; // String type elements
int[] definedSizeArray = new int[5] { 3, 5, 10, 7, 12 }; // Array with defined size
int[] dynamicSizeArray = new int[] { }; // Empty array; size inferred from the initializer
- One-dimensional arrays (1-D arrays):
Also known as a vector, it is an array with only one dimension. Elements are organized linearly.
int[] vector = new int[5]; // Creates an array with 5 integer elements
vector[0] = 1;
vector[1] = 2;
vector[2] = 3;
vector[3] = 4;
vector[4] = 5;
- Two-dimensional arrays (2-D arrays):
Also called a matrix, a two-dimensional array is organized into rows and columns. It can be viewed as a table or grid.
int[,] matrix = new int[3, 3]; // Creates a 3x3 matrix of integers
matrix[0, 0] = 1;
matrix[0, 1] = 2;
matrix[0, 2] = 3;
matrix[1, 0] = 4;
matrix[1, 1] = 5;
matrix[1, 2] = 6;
matrix[2, 0] = 7;
matrix[2, 1] = 8;
matrix[2, 2] = 9;
- Three-dimensional array (3-D arrays):
A three-dimensional array is organized into layers, rows and columns. It can be useful for representing three-dimensional data, such as cubes.
int[,,] cube = new int[3, 3, 3]; // Creates a 3x3x3 cube of integers
cube[0, 0, 0] = 1;
cube[0, 0, 1] = 2;
cube[0, 0, 2] = 3;
cube[0, 1, 0] = 4;
cube[0, 1, 1] = 5;
cube[0, 1, 2] = 6;
cube[0, 2, 0] = 7;
cube[0, 2, 1] = 8;
cube[0, 2, 2] = 9;
// And so on for the other layers of the cube
Arrays are fundamental data structures in C# and are used in various applications to store and manipulate collections of data. Below are some common applications of arrays in C#:
These are just a few examples of how arrays are applied in C# programming. Arrays are versatile and serve as the basis for many data storage and manipulation tasks in various software applications.
A linked list is a data structure consisting of a sequence of elements, where each element (node) contains a value and a reference (or link) to the next element in the sequence.
Thus, we can define that each node contains the following elements:

- Data: the value stored in the node.
- A reference to the next node in the sequence, usually called Next.

The first node of the list is called the Head.
.In C#, there are three main types of linked lists:
In a singly linked list, each node contains a value and a reference to the next node in the list. The last node in the list usually has a reference to null, indicating the end of the list.
Singly linked lists are efficient for operations that involve adding or removing elements from the beginning of the list, but less efficient for operations that involve accessing elements by index.
To create a singly linked list in C#, create a web application with the command dotnet new web -o PracticingDataStructure.
Open the project and create a new folder called “Models” and inside it create the classes below:
namespace PracticingDataStructures.Models;
public class Node<T>
{
public T Data { get; set; }
public Node<T> Next { get; set; }
public Node(T data)
{
Data = data;
Next = null;
}
}
namespace PracticingDataStructures.Models;
public class MyLinkedList<T>
{
private Node<T> head;
public void Add(T data)
{
Node<T> newNode = new Node<T>(data);
if (head == null)
{
head = newNode;
}
else
{
Node<T> current = head;
while (current.Next != null)
{
current = current.Next;
}
current.Next = newNode;
}
}
public void Display()
{
Node<T> current = head;
while (current != null)
{
Console.Write(current.Data + " -> ");
current = current.Next;
}
Console.WriteLine("null");
}
}
Then in the Program.cs file, add the code below:
using PracticingDataStructures.Models;
MyLinkedList<int> myList = new MyLinkedList<int>();
myList.Add(1);
myList.Add(2);
myList.Add(3);
Console.WriteLine("Singly Linked List:");
Console.WriteLine("");
myList.Display();
Console.WriteLine("");
Finally, in the terminal, execute the following command to run the application: dotnet run
Thus, you will have the following result, which shows exactly the structure of a linked list.
In the code above we defined a class Node<T>, which represents a single element in the linked list. It contains a Data property to store the element’s value and a Next property to point to the next element.

We also defined a class MyLinkedList<T>, which represents the linked list itself and has a head property pointing to the first element of the list.

The Add method in the MyLinkedList<T> class allows you to add elements to the end of the list, while the Display method displays the list elements.
In the “Program.cs” file, we create a linked list of integers, add some elements to it, and then display the contents of the linked list.
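Note that the Add method above walks the whole list to append at the end (O(n)), while prepending at the head is O(1) because the head is referenced directly. A self-contained sketch of a hypothetical AddFirst method (the Node&lt;T&gt; and MyLinkedList&lt;T&gt; classes mirror the ones above):

```csharp
using System;

public class Node<T>
{
    public T Data { get; set; }
    public Node<T> Next { get; set; }
    public Node(T data) { Data = data; }
}

public class MyLinkedList<T>
{
    private Node<T> head;

    // O(1): no traversal needed, the new node simply becomes the head.
    public void AddFirst(T data)
    {
        Node<T> newNode = new Node<T>(data);
        newNode.Next = head; // new node points at the old first element
        head = newNode;      // new node becomes the head
    }

    public void Display()
    {
        for (Node<T> current = head; current != null; current = current.Next)
            Console.Write(current.Data + " -> ");
        Console.WriteLine("null");
    }
}

class Program
{
    public static void Main()
    {
        var list = new MyLinkedList<int>();
        list.AddFirst(3);
        list.AddFirst(2);
        list.AddFirst(1);
        list.Display(); // Prints: 1 -> 2 -> 3 -> null
    }
}
```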
In a doubly linked list, each node contains two links—the first points to the previous node, and the second points to the next node in the sequence.
The previous pointer of the first node and the next pointer of the last node will point to null, as demonstrated in the image below:
In C#, you can use the LinkedList<T> class from the System.Collections.Generic namespace to work with doubly linked lists. Here is an example of how to use the built-in LinkedList<T> class:
//Doubly linked list
// Create a LinkedList of integers
LinkedList<int> myDoublyList = new LinkedList<int>();
// Add elements to the list
myDoublyList.AddLast(1);
myDoublyList.AddLast(2);
myDoublyList.AddLast(3);
// Display the elements in the list
Console.WriteLine("Doubly LinkedList:");
foreach (int item in myDoublyList)
{
Console.Write(item + " <-> ");
}
Console.WriteLine("null");
In the code above, we create a LinkedList<int> called myDoublyList and add elements to it; then the list items are displayed through a foreach loop.
If you run the application you will have the following result in the console:
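Because each node holds links in both directions, the built-in LinkedList&lt;T&gt; also supports O(1) insertion at either end and backward traversal through each node's Previous link. A small sketch (the DoublyLinkedDemo class name is my own):

```csharp
using System;
using System.Collections.Generic;

class DoublyLinkedDemo
{
    public static void Main()
    {
        var list = new LinkedList<int>();
        list.AddLast(2);
        list.AddLast(3);
        list.AddFirst(1); // O(1) insertion at the front

        // Walk backward from the last node using the Previous links.
        for (var node = list.Last; node != null; node = node.Previous)
            Console.Write(node.Value + " ");
        Console.WriteLine();
        // Prints: 3 2 1
    }
}
```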
A circular linked list is a type of linked list in which the last element (node) of the list points back to the first element, forming a closed loop or cycle.
In other words, unlike a traditional linear linked list, where the “next” pointer to the last element is typically set to null, in a circular linked list, the “next” pointer to the last element points to the first element of the list as shown in the image below:
To create a circular linked list, inside the “Models” folder add the class below:
namespace PracticingDataStructures.Models;
public class MyCircularLinkedList<T>
{
private Node<T> head;
private Node<T> tail;
public void Add(T data)
{
Node<T> newNode = new Node<T>(data);
if (head == null)
{
head = newNode;
tail = newNode;
tail.Next = head; // Make it circular
}
else
{
newNode.Next = head;
tail.Next = newNode;
tail = newNode;
}
}
public void Display()
{
if (head == null)
{
Console.WriteLine("Circular Linked List is empty.");
return;
}
Node<T> current = head;
do
{
Console.Write(current.Data + " -> ");
current = current.Next;
} while (current != head);
Console.WriteLine(" (Back to head)");
}
}
And in the Program.cs file add the following code:
// Circular Linked List
var circularLinkedList = new MyCircularLinkedList<int>();
circularLinkedList.Add(1);
circularLinkedList.Add(2);
circularLinkedList.Add(3);
Console.WriteLine("Circular Linked List:");
circularLinkedList.Display();
In the code above we defined the MyCircularLinkedList<T> class to represent a circular linked list, using the Node<T> class for the list elements.

The class has head and tail fields, which reference the first and last nodes. When adding elements, we make sure to close the loop by pointing the tail’s Next back to the head.

The Display method allows you to display the elements of the circular linked list.
In the Program.cs file, we create a circular linked list of integers, add some elements and display the contents of the list.
So if you run the application, you will have the following result in the console:
A “stack” is an abstract data type that follows the Last-In-First-Out (LIFO) principle and is represented by a collection of elements with two main operations: Push, which adds an element to the top of the stack, and Pop, which removes and returns the element at the top.
Stacks are commonly used to manage data collections where insertion and removal order is important.
The most recently added item is the first to be removed. Think of it like a stack of dishes: you can only add or remove dishes from the top.
The image below represents the concept of a stack in C#:
Additionally, there is a third operation called Peek (or Top): This operation allows you to view the top element of the stack without removing it.
In the context of C#, we can implement a stack data structure using the System.Collections.Generic.Stack<T> class, where T is the type of elements you want to store on the stack. This class provides Push, Pop and Peek methods to perform the stack operations.
In the Program.cs file add the code below:
Stack<int> stack = new Stack<int>();
// Pushing elements onto the stack
stack.Push(1);
stack.Push(2);
stack.Push(3);
// Peeking at the top element without removing it
int topElement = stack.Peek();
Console.WriteLine("Top element: " + topElement);
// Popping elements from the stack
int poppedElement1 = stack.Pop();
int poppedElement2 = stack.Pop();
Console.WriteLine("Popped element 1: " + poppedElement1);
Console.WriteLine("Popped element 2: " + poppedElement2);
// Peek again to see the new top element
topElement = stack.Peek();
Console.WriteLine("Top element after popping: " + topElement);
In the code above, we create a stack of integers using Stack<int>, then push three integers (1, 2 and 3) onto the stack using the Push method.
Then we use the Peek method to see the top element without removing it, and the Pop method to remove and retrieve elements from the stack. After popping two elements, we peek again to see the new top element.
If you run the application, you will have the following result in the console:
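For comparison with the hand-rolled list classes earlier in the post, here is what a minimal stack could look like if you built it yourself. This is illustrative only — in real code, prefer the built-in Stack<T>:

```csharp
namespace PracticingDataStructures.Models;

// A minimal hand-rolled stack backed by a List<T>. The end of the list
// acts as the top of the stack, so Push/Pop are O(1) amortized.
public class MyStack<T>
{
    private readonly List<T> _items = new();

    public int Count => _items.Count;

    public void Push(T item) => _items.Add(item); // add to the top

    public T Pop() // remove and return the top element
    {
        if (_items.Count == 0)
            throw new InvalidOperationException("Stack is empty.");
        T top = _items[^1];
        _items.RemoveAt(_items.Count - 1);
        return top;
    }

    public T Peek() => // look at the top without removing it
        _items.Count > 0
            ? _items[^1]
            : throw new InvalidOperationException("Stack is empty.");
}
```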
A queue is a structure that represents a collection of elements, where elements are added to one end (back) and removed from the other end (front).
A queue follows the “first in, first out” (FIFO) principle, which means that the element that has been in the queue the longest is the first to be removed. In other words, it works like a real queue of people waiting in line, where whoever enters the queue first is the first to be served.
A queue has two main operations: Enqueue, which adds an element to the back of the queue, and Dequeue, which removes the element at the front.
In addition to these fundamental operations, queues often also provide methods for checking whether the queue is empty and for inspecting the element at the front without removing it, commonly called “peek” or “front.”
We can use queues in a wide variety of scenarios, including task scheduling, message processing and breadth-first traversal of trees and graphs.
In C# we can define queues using the System.Collections.Generic.Queue<T> class. The code below demonstrates how to create a queue and how to perform both Enqueue and Dequeue operations.
// Create a new queue of integers
Queue<int> myQueue = new Queue<int>();
// Enqueue elements to the queue
myQueue.Enqueue(1);
myQueue.Enqueue(2);
myQueue.Enqueue(3);
myQueue.Enqueue(4);
myQueue.Enqueue(5);
// Dequeue and process elements in FIFO order
while (myQueue.Count > 0)
{
    int item = myQueue.Dequeue();
    Console.WriteLine($"Dequeued: {item}");
}
In the code above we declare a Queue<int> called myQueue to store integers. Then, we use the Enqueue method to add elements to the end of the queue.
We then use a while loop to repeatedly dequeue elements using the Dequeue method until the queue is empty. The elements removed from the queue are processed in FIFO order.
You can also use other methods provided by the Queue<T> class, such as Peek to view the front element without removing it and Count to check the number of elements in the queue.
If you run this piece of code you will get the result shown in the image below:
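The Peek and Count members mentioned above can be exercised with a small standalone sketch:

```csharp
var queue = new Queue<string>();
queue.Enqueue("first");
queue.Enqueue("second");

Console.WriteLine(queue.Peek());   // prints "first" — the element stays in the queue
Console.WriteLine(queue.Count);    // prints 2

queue.Dequeue();                   // removes "first"
Console.WriteLine(queue.Peek());   // prints "second"
Console.WriteLine(queue.Count);    // prints 1
```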
Learning data structures is essential for solving software problems efficiently. In this first part, we explored the simplest structures in the context of ASP.NET Core, where no external packages are required: .NET ships with a set of built-in classes that help developers work with structures from the simplest to the most complex.
In this post we saw four types of data structures: arrays, linked lists, stacks and queues. In the next part, we will implement more complex types of data structures such as trees, heaps and graphs.
The context menu in the Progress Telerik UI for ASP.NET Core Grid is a welcome feature, making it possible for you to customize the user experience and create options that make sense for the data presented.
Context menus are one of the best features that an application can have, for both desktop and web, because they allow you to add extra functions and options to the user directly from the data presented on the screen, adding value to your product.
Telerik UI for ASP.NET Core is a robust UI component library recognized for its exceptional quality and performance. The Data Grid is one of the suite’s standout features.
The Grid component demonstrates Progress Telerik’s commitment to continuous evolution and innovation. It results from careful market observation and a keen ear for customer feedback. This dedication to continuous improvement ensures that the Telerik tools are always at the cutting edge of technology, meeting the needs of both developers and end users.
Let’s check it out!
After installing Telerik UI using the Progress Control Panel app or the Visual Studio extension, create a new project of type Telerik UI for ASP.NET Core:
Choose a path for your project:
Choose the technology that fits your project best: HTML or Tag Helpers. For this sample, I picked the Grid and Menu template.
Choose a theme for your project.
Confirm the next screen, and your project will look like this:
The ContextMenu option in the Grid component exposes many sophisticated features that enable developers to interact with grid data more effectively and intuitively.
Here is a description of some of the Grid’s ContextMenu features and functionalities:
Sorting: From the header context menu, users can sort the grid columns straight from the context menu, improving the user experience.
Exporting: It supports exporting grid data to multiple formats such as PDF, Excel and others, allowing data sharing and reporting.
Select: This feature allows you to choose individual rows or cells within the grid.
Edit: Editing the grid content directly is now possible, easing user interaction for data updates.
Copy selection: Users can copy the selected cells or rows to make it easier to use the data elsewhere.
Copy selection with no headers: It allows you to copy the selected data without the headers, giving you more flexibility in data consumption.
Reorder row: The reordering tool allows users to rearrange the rows based on their choices or needs.
Conditional actions: Developers can set the context menu to display different options depending on the data in the grid. Specific menu options, for example, can be revealed or hidden based on the values in the rows or cells.
Dynamic Menu Options: The context menu’s options can be dynamically adjusted, allowing for a more responsive and intelligent interface that reacts to the grid’s data.
Custom Menu Items: Developers can add custom menu items that trigger specific actions, extending the grid’s functionality based on project requirements.
On the GridController, I changed the code to return my desired data for this demonstration:
using Kendo.Mvc.Extensions;
using Kendo.Mvc.UI;
using Microsoft.AspNetCore.Mvc;
using TelerikAspNetCoreApp1.Models;

namespace TelerikAspNetCoreApp1.Controllers;

public class GridController : Controller
{
    public ActionResult Orders_Read([DataSourceRequest] DataSourceRequest request)
    {
        var result = Enumerable.Range(2, 51).Select(i => new OrderViewModel
        {
            OrderID = i,
            Freight = i * 10,
            OrderDate = new DateTime(2023, 9, 15).AddDays(i % 7),
            ShipName = "ShipName " + i,
            ShipCity = "ShipCity " + i
        });

        var customRet = result.ToList();

        customRet.Insert(0, new OrderViewModel
        {
            OrderID = 1,
            Freight = 1 * 10,
            OrderDate = new DateTime(2023, 9, 15).AddDays(1 % 7),
            ShipName = "Brazil",
            ShipCity = "Porto Alegre"
        });

        var dsResult = customRet.ToDataSourceResult(request);
        return Json(dsResult);
    }
}
Now, the first row has specific data that will be monitored by the option in the context menu.
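For reference, the OrderViewModel bound by this controller is the model generated by the Telerik project template. A minimal shape consistent with the properties used in this post would be the following (the [Key] attribute is required later for the grid’s Model configuration):

```csharp
using System.ComponentModel.DataAnnotations;

namespace TelerikAspNetCoreApp1.Models;

// Sketch of the template-generated model, reduced to the properties
// this post actually uses.
public class OrderViewModel
{
    [Key]
    public int OrderID { get; set; }
    public decimal Freight { get; set; }
    public DateTime OrderDate { get; set; }
    public string ShipName { get; set; }
    public string ShipCity { get; set; }
}
```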
On Index.cshtml, I added two notifications:
@(Html.Kendo().Notification()
    .Name("notification")
    .Position(p => p.Pinned(true).Top(60).Left(30))
    .AutoHideAfter(3000)
)

@(Html.Kendo().Notification()
    .Name("notificationOk")
    .Position(p => p.Pinned(true).Top(30).Left(30))
    .AutoHideAfter(2500)
)
Add the method ContextMenu to the Grid component:
@(Html.Kendo().Grid<TelerikAspNetCoreApp1.Models.OrderViewModel>()
    .Name("grid")
    .ContextMenu(
Add the Head method with the options your requirements demand:
@(Html.Kendo().Grid<TelerikAspNetCoreApp1.Models.OrderViewModel>()
    .Name("grid")
    .ContextMenu(menu => menu
        .Head(head =>
        {
            head.Create();
            head.Separator();
            head.SortAsc();
            head.SortDesc();
            head.Separator();
            head.ExportPDF().Text("Generate Pdf File").Icon("file");
            head.ExportExcel();
        })
In this sample, I added a custom text to ExportPDF().
Add the Body method with the options your requirements demand:
@(Html.Kendo().Grid<TelerikAspNetCoreApp1.Models.OrderViewModel>()
    .Name("grid")
    .ContextMenu(menu => menu
        .Head(head =>
        {
            head.Create();
            head.Separator();
            head.SortAsc();
            head.SortDesc();
            head.Separator();
            head.ExportPDF().Text("Generate Pdf File").Icon("file");
            head.ExportExcel();
        })
        .Body(body =>
        {
            body.Edit();
            body.Destroy();
            body.Separator();
            body.Select();
            body.CopySelection();
            body.CopySelectionNoHeaders();
            body.Separator();
            body.ReorderRow();
            body.Custom("myTool").Text("Check status").Icon("gear");
        })
    )
I added a custom command to “Check status” on the Body method. Pay attention to the custom name “myTool.”
On the Index.cshtml I added the script:
<script>
    kendo.ui.grid.commands["myToolCommand"] = kendo.ui.grid.GridCommand.extend({
        exec: function () {
            var grid = $("#grid").data("kendoGrid");
            var selectedItems = grid.selectedKeyNames();
            var actualItems = [];

            if (selectedItems.length > 0) {
                selectedItems.forEach(function (key) {
                    var item = grid.dataSource.get(key);
                    if (item.ShipCity == "Porto Alegre") {
                        var popupNotification = $("#notification").data("kendoNotification");
                        popupNotification.show(`This city '${item.ShipCity}' is temporarily blocked!`, "error");
                    }
                    else {
                        actualItems.push(item);
                    }
                });
            }
            if (actualItems.length > 0) {
                var popupNotification = $("#notificationOk").data("kendoNotification");
                popupNotification.show(`'${actualItems.length}' cities available!`, "info");
            }
        }
    });
</script>
The name “myTool” becomes a command by adding “Command” to the name in the grid commands:
kendo.ui.grid.commands["myToolCommand"] = kendo.ui.grid.GridCommand.extend({
...
The image below shows the working demonstration:
Now, when “Check status” is clicked, the validation below will execute:
var grid = $("#grid").data("kendoGrid");
var selectedItems = grid.selectedKeyNames();
var actualItems = [];

if (selectedItems.length > 0) {
    selectedItems.forEach(function (key) {
        var item = grid.dataSource.get(key);
        if (item.ShipCity == "Porto Alegre") {
            var popupNotification = $("#notification").data("kendoNotification");
            popupNotification.show(`This city '${item.ShipCity}' is temporarily blocked!`, "error");
        }
        else {
            actualItems.push(item);
        }
    });
}
if (actualItems.length > 0) {
    var popupNotification = $("#notificationOk").data("kendoNotification");
    popupNotification.show(`'${actualItems.length}' cities available!`, "info");
}
The line below receives all selected key values:
var selectedItems = $("#grid").data("kendoGrid").selectedKeyNames();
The line below gets the data record by the key:
var item = grid.dataSource.get(key);
The line below checks the ShipCity property/column from the model:
if (item.ShipCity == "Porto Alegre")
And the notification will be displayed with:
if (actualItems.length > 0) {
    var popupNotification = $("#notificationOk").data("kendoNotification");
    popupNotification.show(`'${actualItems.length}' cities available!`, "info");
}
There are a few extra configurations you need to make for this sample to work:
Add the Model method and define the unique ID for the grid, using .Model(model => model.Id(p => p.OrderID)):
.DataSource(dataSource => dataSource
    .Ajax()
    .Model(model => model.Id(p => p.OrderID))
    .PageSize(20)
    .Read(read => read.Action("Orders_Read", "Grid"))
    )
)
On the OrderViewModel, define the [Key] attribute for OrderID:
[Key]
public int OrderID { get; set; }
And that’s it. It should be running!
Here is the complete definition of the Grid component for your convenience:
@(Html.Kendo().Grid<TelerikAspNetCoreApp1.Models.OrderViewModel>()
    .Name("grid")
    .ContextMenu(menu => menu
        .Head(head =>
        {
            head.Create();
            head.Separator();
            head.SortAsc();
            head.SortDesc();
            head.Separator();
            head.ExportPDF().Text("Generate Pdf File").Icon("file");
            head.ExportExcel();
        })
        .Body(body =>
        {
            body.Edit();
            body.Destroy();
            body.Separator();
            body.Select();
            body.CopySelection();
            body.CopySelectionNoHeaders();
            body.Separator();
            body.ReorderRow();
            body.Custom("myTool").Text("Check status").Icon("gear");
        })
    )
    .Columns(columns =>
    {
        columns.Bound(p => p.OrderID).Filterable(false);
        columns.Bound(p => p.Freight);
        columns.Bound(p => p.OrderDate).Format("{0:MM/dd/yyyy}");
        columns.Bound(p => p.ShipName);
        columns.Bound(p => p.ShipCity);
    })
    .Selectable(selectable => selectable
        .Mode(GridSelectionMode.Multiple))
    .Pageable()
    .Sortable()
    .Scrollable()
    .Groupable()
    .Filterable()
    .HtmlAttributes(new { style = "height:550px;" })
    .DataSource(dataSource => dataSource
        .Ajax()
        .Model(model => model.Id(p => p.OrderID))
        .PageSize(20)
        .Read(read => read.Action("Orders_Read", "Grid"))
    )
)
You can access this working project sample at my GitHub.
Including the ContextMenu in Telerik UI for ASP.NET Core’s DataGrid is valuable and practical. This feature represents a significant leap in the customization and manipulation of data within the grid, allowing for a more nuanced and user-specific interaction with the data shown. It ensures that users may accomplish jobs with greater efficiency and productivity by providing a profusion of alternatives and actions that can be instantly applied through the context menu.
In summary, the ContextMenu is a powerful ally, assisting users in quickly navigating, managing and modifying data, significantly contributing to improved user experiences and operational fluency in data handling within web applications.
Start creating data solutions that are worth using immediately by registering for a free trial. Plus, even during your free trial, you’ll receive unparalleled support from the industry-leading Progress Telerik team.
Design patterns help developers solve common problems when building applications and features. In the context of ASP.NET Core, it is essential to know design patterns—after all, the ASP.NET ecosystem itself is based on many of these patterns, such as the MVC pattern.
In this post, we will learn about some of the main patterns and implement one of them in an ASP.NET Core application, so that by the end of the post you will be familiar with design patterns and able to apply them whenever there is an opportunity.
Design patterns are reusable, proven solutions to common problems that arise during software design and development.
They are not tied to a particular programming language or technology, but rather provide general guidelines and templates that aim to achieve various software design objectives such as flexibility, maintainability and scalability.
Design patterns help developers create more efficient, organized and maintainable code by encapsulating best practices and promoting code reuse.
These design patterns are not rigid models that must be followed at all costs, but rather principles and guidelines that can be adapted and customized to meet the specific needs of a software project. Proper use of design patterns can lead to more extensible and efficient software systems.
There are several categories of design patterns, among which four stand out:
These patterns deal with mechanisms for creating objects in a way that is appropriate to the situation. They often involve the use of builders, factories and prototypes.
Among the creational patterns are the singleton, factory method, abstract factory, builder and prototype patterns.
These patterns deal with the composition of objects, generally defining their relationships to form larger structures. They help design a flexible and efficient system.
Among the structural patterns are the adapter, composite, decorator, facade and proxy patterns.
These patterns address interaction and communication between objects, focusing on how objects distribute responsibilities and collaborate with each other.
Among the behavioral patterns are the observer, strategy, command, iterator and mediator patterns.
These high-level patterns provide a model for a software application’s overall structure and organization. They guide the architectural design of entire systems or subsystems.
Among the architectural patterns are MVC (Model-View-Controller), layered (n-tier) architecture, microservices and onion architecture.
The abstract factory design pattern in the context of ASP.NET Core is an approach used to create families of related or dependent objects in a flexible and extensible way. It is especially useful when you need to ensure that a set of objects are compatible and consistent, but want to maintain the flexibility to swap these object families easily.
In this post, we will create an example where we need to calculate a student’s payment amount, where, depending on the due date, a discount or an extra fee will be applied.
In this example, we will have an abstract class that will have a method called “CalculateFeeAmount” and two derived classes that will be responsible for applying the discount or extra fee.
Each design pattern contains some degree of complexity, so this post will focus on a single design pattern. We will explore its meaning and how to implement it in practice in a real-world application in ASP.NET Core, so let’s get started!
In this post, we will create an ASP.NET Core minimal API that is responsible for registering student fees.
Below are three prerequisites to implement the application:
To create the base application via terminal, use the following command:
dotnet new web -o StudentFeesTracker
You can access the complete source code here: Student Fees Tracker source code.
Now let’s install the NuGet packages that we will need later in the project. In the terminal, execute the following commands:
dotnet add package Swashbuckle.AspNetCore
dotnet add package Dapper --version 2.0.151
dotnet add package MySql.Data --version 8.1.0
To create the entity classes, go inside the project and create a folder called “Models.” Inside it, create the following classes:
namespace StudentFeesTracker.Models;

public class EntityDto
{
    public Guid Id { get; set; }
}

namespace StudentFeesTracker.Models;

public class StudentFee : EntityDto
{
    public Guid StudentId { get; set; }
    public decimal Amount { get; set; }
    public DateTime DueDate { get; set; }
    public bool IsPaid { get; set; }
}

namespace StudentFeesTracker.Models;

public class ConnectionString
{
    public string? ProjectConnection { get; set; }
}
In ASP.NET Core and software development in general, a repository is a design technique used to separate the logic that retrieves and stores data from the rest of the application, which helps improve the application’s maintainability, testability and scalability.
Create a new folder called “Repositories,” and inside it create the class and interface below:
using StudentFeesTracker.Models;

namespace StudentFeesTracker.Repositories;

public interface IStudentFeeRepository
{
    Task<List<StudentFee>> FindAll();
    Task Create(StudentFee studentFee);
}

using System.Data;
using Dapper;
using Microsoft.Extensions.Options;
using MySql.Data.MySqlClient;
using StudentFeesTracker.Models;

namespace StudentFeesTracker.Repositories;

public class StudentFeeRepository : IStudentFeeRepository
{
    private readonly IDbConnection _dbConnection;

    public StudentFeeRepository(IOptions<ConnectionString> connectionString)
    {
        _dbConnection = new MySqlConnection(connectionString.Value.ProjectConnection);
    }

    public async Task<List<StudentFee>> FindAll()
    {
        string query = @"select
                            id Id,
                            student_id StudentId,
                            amount Amount,
                            due_date DueDate,
                            is_paid IsPaid
                         from student_fees";

        var projects = await _dbConnection.QueryAsync<StudentFee>(query);
        return projects.ToList();
    }

    public async Task Create(StudentFee studentFee)
    {
        string query = @"insert into student_fees(id, student_id, amount, due_date, is_paid)
                         values(@Id, @StudentId, @Amount, @DueDate, @IsPaid)";

        await _dbConnection.ExecuteAsync(query, studentFee);
    }
}
Below is the SQL code needed to create the database and table used in the post example:
-- Create the database
CREATE DATABASE student_fee_management;

-- Switch to the newly created database
USE student_fee_management;

-- Create the table to store student fee information
CREATE TABLE student_fees (
    id CHAR(36) PRIMARY KEY,
    student_id CHAR(36),
    amount DECIMAL(10, 2),
    due_date DATETIME,
    is_paid BOOLEAN
);
Note that the code above has a method for inserting records into the database, but imagine that before inserting them we must calculate the discount or extra fee on the fee amount, depending on the due date.
To follow good design practice, we can use the abstract factory pattern to handle this. To do this, we can create a base factory class that will contain a method called “CalculateFeeAmount()” and two derived classes, one to calculate the discount and the other to calculate the extra late fee.
In the root of the project, create a new folder called “Factories” and inside it create the classes below:
namespace StudentFeesTracker.Factories;

public abstract class StudentFeeFactory
{
    public abstract decimal CalculateFeeAmount(decimal amount);
}

namespace StudentFeesTracker.Factories;

public class DiscountStudentFeeFactory : StudentFeeFactory
{
    public override decimal CalculateFeeAmount(decimal amount)
    {
        return amount * 0.9m; // Apply a 10% discount
    }
}

namespace StudentFeesTracker.Factories;

public class LateStudentFeeFactory : StudentFeeFactory
{
    public override decimal CalculateFeeAmount(decimal amount)
    {
        return amount * 1.1m; // Apply a 10% late fee
    }
}
Note that the StudentFeeFactory class defines a method (CalculateFeeAmount()) which is implemented by the two factory classes LateStudentFeeFactory and DiscountStudentFeeFactory.
It is important to note that the abstract CalculateFeeAmount() method only defines input and output values; it does not define behavior. Therefore, each derived class can handle the fee calculation independently.
In this scenario, we use a simple discount or surcharge calculation that could live in the project’s service class itself. But imagine if there were more business rules in the calculation, such as communication with external APIs: it would be difficult to keep everything in a single class. That’s why it’s very important to separate the logic into distinct classes, even if the actual calculation is simple. This way you prepare the system to scale if necessary, making code maintenance easier.
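To illustrate that extensibility, a new rule can be added without touching the existing factories. The class below is a hypothetical example (it is not part of the post’s sample project) showing how a third calculation would slot into the same hierarchy:

```csharp
namespace StudentFeesTracker.Factories;

// Hypothetical extension: a scholarship rule. Because the service depends
// only on the StudentFeeFactory abstraction, adding this class requires
// no changes to DiscountStudentFeeFactory or LateStudentFeeFactory.
public class ScholarshipStudentFeeFactory : StudentFeeFactory
{
    public override decimal CalculateFeeAmount(decimal amount)
    {
        return amount * 0.5m; // hypothetical 50% scholarship discount
    }
}
```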
The service class will be used to contain the application’s business rules and to access the repository’s methods. It is in this class that the dependencies on the previously created factory classes will be injected.
So, create a new folder called “Services” and inside it create the following class:
using StudentFeesTracker.Factories;
using StudentFeesTracker.Models;
using StudentFeesTracker.Repositories;

namespace StudentFeesTracker.Services;

public class StudentFeeService
{
    private readonly IStudentFeeRepository _repository;
    private readonly StudentFeeFactory _lateStudentFeeFactory;
    private readonly StudentFeeFactory _discountStudentFeeFactory;

    public StudentFeeService(IStudentFeeRepository repository, LateStudentFeeFactory lateStudentFeeFactory, DiscountStudentFeeFactory discountStudentFeeFactory)
    {
        _repository = repository;
        _lateStudentFeeFactory = lateStudentFeeFactory;
        _discountStudentFeeFactory = discountStudentFeeFactory;
    }

    public async Task<List<StudentFee>> FindAll()
    {
        var studentFees = await _repository.FindAll();
        return studentFees;
    }

    public async Task<Guid> Create(StudentFee studentFee)
    {
        CalculateFee(studentFee);
        GenerateId(studentFee);
        await _repository.Create(studentFee);
        return studentFee.Id;
    }

    private void CalculateFee(StudentFee studentFee)
    {
        if (IsLate(studentFee.DueDate))
        {
            studentFee.Amount = _lateStudentFeeFactory.CalculateFeeAmount(studentFee.Amount);
        }
        else if (IsDiscount(studentFee.DueDate))
        {
            studentFee.Amount = _discountStudentFeeFactory.CalculateFeeAmount(studentFee.Amount);
        }
    }

    private bool IsLate(DateTime dueDate)
    {
        const int lateDayThreshold = 10;
        return dueDate.Day > lateDayThreshold;
    }

    private bool IsDiscount(DateTime dueDate)
    {
        const int discountDayThreshold = 5;
        return dueDate.Day < discountDayThreshold;
    }

    private void GenerateId(StudentFee studentFee)
    {
        studentFee.Id = Guid.NewGuid();
    }
}
Note that in the code above, both factory classes are being passed via the constructor, and their methods are being used to calculate the discount or late fee depending on the date entered. This way, if we had more methods or factory classes, they would be ready to be used by the service class, making the application modular and flexible.
For the application to be functional, it is necessary to create the API endpoints and inject dependencies of the service, repository and factory classes, in addition to making the connection string with the database.
Replace the code in the “Program.cs” file with the following code:
using StudentFeesTracker.Factories;
using StudentFeesTracker.Models;
using StudentFeesTracker.Repositories;
using StudentFeesTracker.Services;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddTransient<IStudentFeeRepository, StudentFeeRepository>();
builder.Services.AddSingleton<StudentFeeService>();
builder.Services.Configure<ConnectionString>(builder.Configuration.GetSection("ConnectionStrings"));
builder.Services.AddTransient<LateStudentFeeFactory>();
builder.Services.AddTransient<DiscountStudentFeeFactory>();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapGet("/v1/student/fees", async (StudentFeeService service) =>
{
    var fees = await service.FindAll();
    return fees.Any() ? Results.Ok(fees) : Results.NotFound("No fees found");
})
.WithName("FindAllFees");

app.MapPost("/v1/student/fees", async (StudentFeeService service, StudentFee newStudentFee) =>
{
    var createdId = await service.Create(newStudentFee);
    return Results.Created($"/v1/student/fees/{createdId}", createdId);
})
.WithName("CreateNewFee");

app.Run();
And in the “appsettings.json” file, add the code snippet below, replacing the capitalized keywords with your local MySQL settings.
"ConnectionStrings": {
  "ProjectConnection": "host=localhost; port=PORT; database=student_fee_management; user=USER; password=PASSWORD;"
},
Once this is done, we can run the application and test its functions. To do this, simply run dotnet run in the terminal and access http://localhost:5168/swagger/index.html in the browser.
Then you will see the Swagger page and can perform the defined operations, as shown in the GIF below:
Design patterns are extremely useful, as in addition to solving many problems they also help developers and designers create robust applications, prepared to be scaled using good practices.
Something important to note is that design patterns should be implemented when there is a problem to be solved, not applied for their own sake at any cost.
In this post, we saw a very well-known pattern, abstract factory, which is normally used to decouple components in an application, since it encourages defining abstractions for creating families of related objects and business rules.
So always consider using design patterns when creating a new application or refactoring an existing application.
Understanding SOLID principles and applying them is a decisive factor for any developer who wants to advance their career. After all, SOLID allows the creation of more sustainable and flexible object-oriented systems and this is undoubtedly a requirement found in the vast majority of job market opportunities.
Throughout the post, we will cover the principles that make up SOLID and see how to apply them in practice when building ASP.NET Core applications.
SOLID is an acronym created by Robert C. Martin that represents the set of object-oriented design principles that aim to improve maintainability, extensibility and understanding of source code.
Each letter of SOLID represents a principle:
S - Single Responsibility Principle: This principle emphasizes that a class should have only a single responsibility in the system. This results in classes that are more cohesive and easier to maintain.
O - Open/Closed Principle: This principle states that software entities, such as classes and modules, should be open for extension, but closed for modification. This means you can add new behaviors or functionality without changing existing code.
L - Liskov Substitution Principle: This principle emphasizes that derived classes (subclasses) must be replaceable by base classes (superclasses) without affecting program correctness. This promotes code consistency and interoperability.
I - Interface Segregation Principle: This principle suggests that interfaces should not be too comprehensive, but specific to the clients that use them. This prevents classes from implementing methods they don’t need, reducing coupling and improving cohesion.
D - Dependency Inversion Principle: This principle proposes that high-level modules should not depend directly on low-level modules, but both should depend on abstractions. Furthermore, details should depend on abstractions, not the other way around, which promotes a more flexible and easily adaptable architecture.
Together, these SOLID principles provide guidelines for creating more robust, flexible and maintainable code, helping to build high-quality, scalable software systems.
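As a quick sketch of the Liskov Substitution Principle, consider the hypothetical classes below (none of these types belong to the sample project built later in this post). Code written against the base type keeps working no matter which derived type it receives:

```csharp
// Hypothetical illustration of LSP: callers depend only on the base class.
public abstract class Notification
{
    public abstract string Send(string message);
}

public class EmailNotification : Notification
{
    public override string Send(string message) => $"Email: {message}";
}

public class SmsNotification : Notification
{
    public override string Send(string message) => $"SMS: {message}";
}

public static class Notifier
{
    // Either subclass can be substituted here without changing this method —
    // that substitutability is what LSP demands.
    public static string Process(Notification notification, string message)
        => notification.Send(message);
}
```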
ASP.NET Core uses C#, an object-oriented programming language that allows developers to create modular and decoupled applications.
However, object orientation can become a problem if used without taking good programming practices into account. That’s why we need SOLID—it gives us five principles that help us avoid future problems in a software project by creating flexible and maintainable code.
To practice SOLID in an ASP.NET Core application, let’s create a simple minimal API to save some data to a database.
Throughout the post, we will see each of the principles and how we can use them in the application.
You can check the source code of the complete project here: ContactHub - GitHub source code.
To create the application using the minimal API template, use the command below:
dotnet new web -o ContactHub
To practice SOLID principles, let’s first create some classes and configurations to prepare the application.
In the example of the post, we are going to use SQLite, a lightweight database commonly used in mobile applications, which is stored as a file in the root of the application.
We are also going to install EF Core, which is an ORM used to work with databases. EF Core is very useful because it has several functions that facilitate the construction of CRUD operations in the database.
So to download the SQLite and EF Core dependencies into the project, use the commands below:
dotnet add package Microsoft.EntityFrameworkCore --version 7.0.10
dotnet add package Microsoft.EntityFrameworkCore.Design --version 7.0.10
dotnet add package Microsoft.EntityFrameworkCore.Sqlite --version 7.0.10
Then, in the root of the project create a new folder called “Models” and inside it, create the classes below:
namespace Models;
public class Contact : EntityDto
{
public string FullName { get; set; }
public string PhoneNumber { get; set; }
public string EmailAddress { get; set; }
public string Address { get; set; }
public bool IsDeleted { get; set; }
public DateTimeOffset CreatedOn { get; set; }
}
namespace Models;
public class EntityDto
{
public Guid Id { get; set; }
};
Now, let’s create the context class to configure the database. In the root of the project, create a new folder called “Data”. Inside it, create a new class called “ContactDBContext” and put the code below in it:
using Microsoft.EntityFrameworkCore;
using Models;
namespace Data;
public class ContactDBContext : DbContext
{
public DbSet<Contact> Contacts { get; set; }
protected override void OnConfiguring(DbContextOptionsBuilder options) =>
options.UseSqlite("DataSource=contacts_db.db;Cache=Shared");
}
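As a side note, hard-coding the provider in OnConfiguring works for a demo, but the options can also be supplied from the DI container. A hedged sketch of that alternative (the class name ContactDBContextAlt and the registration line are illustrative, not part of the project):

```csharp
using Microsoft.EntityFrameworkCore;
using Models;

namespace Data;

// Alternative sketch: the context receives its configuration from outside
// instead of hard-coding the provider in OnConfiguring.
public class ContactDBContextAlt : DbContext
{
    public ContactDBContextAlt(DbContextOptions<ContactDBContextAlt> options)
        : base(options) { }

    public DbSet<Contact> Contacts { get; set; }
}

// In Program.cs the provider would then be chosen at registration time:
// builder.Services.AddDbContext<ContactDBContextAlt>(o =>
//     o.UseSqlite("DataSource=contacts_db.db;Cache=Shared"));
```

This keeps the connection details out of the context class, which makes it easier to swap providers in tests.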
Once that’s done, we can start implementing the SOLID principles.
The first principle of SOLID (Single Responsibility) says that a class must have only one responsibility, that is, only one reason to change. In the context of ASP.NET Core, we can extend this logic to methods and functions. For example, a data insertion method should not contain validation logic; it should just insert the data.
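To illustrate, a small hedged sketch (the ContactValidator class is hypothetical, not part of the project): validation gets its own class, so a persistence class never needs to change for validation reasons.

```csharp
using Models;

// Hypothetical class: validation is its own responsibility,
// kept out of any class that persists data.
public class ContactValidator
{
    public bool IsValid(Contact contact) =>
        !string.IsNullOrWhiteSpace(contact.FullName) &&
        !string.IsNullOrWhiteSpace(contact.EmailAddress);
}
```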
So, in the folder “Data” create a new class called “ContactRepository” and put the code below in it:
namespace Data;
using Microsoft.EntityFrameworkCore;
using Models;
public class ContactRepository
{
private readonly ContactDBContext _db;
public ContactRepository(ContactDBContext db)
{
_db = db;
}
public async Task<List<Contact>> FindAllContactsAsync()
{
return await _db.Contacts.ToListAsync();
}
public async Task<Contact> FindContactByIdAsync(Guid id)
{
var contact = await _db.Contacts.SingleOrDefaultAsync(c => c.Id == id);
return contact;
}
public async Task<Guid> InsertAsync(Contact contact)
{
contact.Id = Guid.NewGuid();
contact.CreatedOn = DateTimeOffset.Now;
await _db.AddAsync(contact);
await _db.SaveChangesAsync();
return contact.Id;
}
public async Task UpdateAsync(Contact contact, Contact existingContact)
{
existingContact.FullName = contact.FullName;
existingContact.PhoneNumber = contact.PhoneNumber;
existingContact.EmailAddress = contact.EmailAddress;
existingContact.Address = contact.Address;
existingContact.IsDeleted = contact.IsDeleted;
await _db.SaveChangesAsync();
}
public async Task DeleteAsync(Contact contact)
{
_db.Remove(contact);
await _db.SaveChangesAsync();
}
}
Note that the ContactRepository class implements the Single Responsibility Principle because it has a single, well-defined responsibility: dealing with data persistence, providing methods to fetch, insert, update and delete contacts in the database.
We can also identify other benefits in the ContactRepository class:
Separation of concerns: The ContactRepository class does not mix the business logic of contacts with the interaction with the database. It focuses on database operations without worrying about business rules.
Maintainability: The ContactRepository class is easier to maintain and understand. Changes to contact-related database operations can be made in this class without affecting other parts of the code.
In the “Open/Closed” principle, classes and modules should be open for extension, but closed for modification: you can add new behaviors or functionality without changing the existing code.
Thinking about the example in the post, imagine that instead of the Contact class having a single FullName property, it should now have two properties: Name and LastName.
To respect the Open/Closed principle, instead of modifying the Contact class and deleting the FullName property, let’s just add the new fields to it.
In the Contact class, add the following properties above the FullName property:
public string Name { get; set; }
public string LastName { get; set; }
This way, if any system module uses the FullName property, it will not break, as the property still exists, even if it is less important in other contexts.
By using the Open/Closed principle, we ensure that the system is stable by adding properties and behaviors instead of modifying them.
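Extension methods are another C# feature that fits the Open/Closed principle: behavior is added alongside the class instead of inside it. A hedged sketch (the ContactExtensions class is illustrative, not part of the project):

```csharp
using Models;

// Hypothetical extension: new behavior for Contact without modifying the class.
public static class ContactExtensions
{
    public static string GetDisplayName(this Contact contact) =>
        string.IsNullOrWhiteSpace(contact.FullName)
            ? $"{contact.Name} {contact.LastName}"
            : contact.FullName;
}
```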
In the principle of Liskov Substitution, objects of a derived class must be able to be treated as objects of the base class without problems.
For example, if you have a base class Contact and a derived class PersonalContact, you should be able to treat a PersonalContact object as a Contact without breaking the program logic. In other words, the PersonalContact class should just extend the Contact class without overriding the behavior of the base class. Methods in the derived class must, at a minimum, maintain the same contract and functionality as the base class.
Following the Liskov Substitution Principle helps ensure that your class hierarchies are well-designed and that object substitutions do not introduce subtle errors into the code. This contributes to the maintainability and extensibility of the software.
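For contrast, a hedged sketch of what a violation would look like (both classes are hypothetical, not part of the project): a subclass that throws where the base class simply returns breaks every caller that only knows the base type.

```csharp
public class BaseContact
{
    public virtual string GetEmailAddress() => "someone@example.com";
}

// LSP violation: substituting HiddenContact for BaseContact
// surprises callers with an exception the base contract never promised.
public class HiddenContact : BaseContact
{
    public override string GetEmailAddress() =>
        throw new InvalidOperationException("Email is hidden.");
}
```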
To practice the Liskov Substitution Principle in the post example, let’s implement the PersonalContact class and see how it can stand in for the Contact base class.
So, in the “Models” folder, create the class below:
namespace Models;
public class PersonalContact : Contact
{
public string Nickname { get; set; }
}
Now to create the application’s business rules and include methods that will use the repository class created earlier, create a new folder called Services
and inside it create a new class called ContactService
and place the code below in it:
using Data;
using Models;
namespace Services;
public class ContactService
{
private readonly ContactRepository _repository;
public ContactService(ContactRepository repository)
{
_repository = repository;
}
public string GetPersonalContactFullName(Contact contact)
{
if (contact is PersonalContact personalContact)
{
string fullName = $"{contact.FullName} {personalContact.Nickname}";
return fullName;
}
return contact.FullName;
}
}
Note that in the GetPersonalContactFullName() method, a comparison is made to find out whether the Contact object is of type PersonalContact. This is only possible because PersonalContact is a subclass of Contact, and it does not violate the Liskov Substitution Principle because it only extends the base class. In other words, it can be substituted for, or in this case compared against, the base class Contact.
In the Interface Segregation Principle, we must be careful not to create overly comprehensive interfaces; they must be specific to each client that will use them, thus avoiding the creation of unnecessary methods.
So, to practice interface segregation in the project, we will create one interface for the Repository class and another for the Service class. Inside the “Data” folder, create a new interface named “IContactRepository” and place the code below in it:
using Models;
namespace Data
{
public interface IContactRepository
{
Task<List<Contact>> FindAllContactsAsync();
Task<Contact> FindContactByIdAsync(Guid id);
Task<Guid> InsertAsync(Contact contact);
Task UpdateAsync(Contact contact, Contact existingContact);
Task DeleteAsync(Contact contact);
}
}
Then, inside the “Services” folder, create a new interface called “IContactService” and place the code below in it:
using Models;
namespace Services
{
public interface IContactService
{
Task<List<Contact>> FindAllContactsAsync();
Task<Contact> FindContactByIdAsync(Guid id);
string GetPersonalContactFullName(Contact contact);
Task<Guid> CreateContactAsync(Contact contact);
Task UpdateContactAsync(Guid id, Contact updatedContact);
Task DeleteContactAsync(Guid id);
}
}
Note that both interfaces are very similar. After all, the Service class accesses the methods of the Repository class. But there is one method, GetPersonalContactFullName(), that is present only in the service class. If we used the same interface for both classes, the GetPersonalContactFullName() method would be useless in the Repository class, unnecessarily forcing duplicate code, increasing coupling between the classes and violating the Interface Segregation Principle.
Therefore, when creating interfaces, always prefer specific interfaces over generic ones.
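The same idea can be shown in a more general form. In this hedged sketch (the interface names are hypothetical, not part of the project), a single fat interface is split so that read-only clients are not forced to depend on write operations:

```csharp
using Models;

// Instead of one interface with every operation,
// clients depend only on the members they actually use.
public interface IContactReader
{
    Task<List<Contact>> FindAllContactsAsync();
    Task<Contact> FindContactByIdAsync(Guid id);
}

public interface IContactWriter
{
    Task<Guid> InsertAsync(Contact contact);
    Task DeleteAsync(Contact contact);
}
```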
In the principle of Dependency Inversion, we must focus on the importance of reducing coupling between system modules.
To maintain loose coupling, high-level modules should not depend on low-level modules. Both must depend on abstractions.
In ASP.NET Core, a well-known way to practice this principle is through dependency injection (DI). Dependency injection is a design pattern and programming concept where dependencies external to an object are injected into it, rather than the object creating those dependencies on its own.
ASP.NET Core has a built-in dependency injection system that allows you to register and inject dependencies into your classes: in minimal APIs this is done through the service collection in the Program class, while older versions of ASP.NET Core use the ConfigureServices method of the Startup class.
To implement Dependency Inversion in the project, just add the following code to the Program.cs file:
builder.Services.AddDbContext<ContactDBContext>();
builder.Services.AddScoped<IContactRepository, ContactRepository>();
builder.Services.AddScoped<IContactService, ContactService>();
Then, replace the code in the “ContactService” class with the code below:
using Data;
using Models;
namespace Services;
public class ContactService : IContactService
{
private readonly IContactRepository _repository;
public ContactService(IContactRepository repository)
{
_repository = repository;
}
public async Task<List<Contact>> FindAllContactsAsync()
{
return await _repository.FindAllContactsAsync();
}
public async Task<Contact> FindContactByIdAsync(Guid id)
{
var contact = await _repository.FindContactByIdAsync(id);
return contact;
}
public async Task<Guid> CreateContactAsync(Contact contact)
{
return await _repository.InsertAsync(contact);
}
public async Task UpdateContactAsync(Guid id, Contact updatedContact)
{
var existingContact = await _repository.FindContactByIdAsync(id);
if (existingContact != null)
{
await _repository.UpdateAsync(updatedContact, existingContact);
}
}
public async Task DeleteContactAsync(Guid id)
{
var existingContact = await _repository.FindContactByIdAsync(id);
if (existingContact != null)
{
await _repository.DeleteAsync(existingContact);
}
}
public string GetPersonalContactFullName(Contact contact)
{
if (contact is PersonalContact personalContact)
{
string fullName = $"{contact.FullName} {personalContact.Nickname}";
return fullName;
}
return contact.FullName;
}
}
In the code added to Program.cs, we define the dependency injection configuration for the ContactDBContext, ContactRepository and ContactService classes.
Note that the ContactRepository class is passed to the AddScoped method along with its interface. AddScoped is one of the service lifetimes available for dependency injection in ASP.NET Core: whenever the IContactService interface is requested, an instance of the ContactService class is provided, with one instance created per HTTP request.
In the ContactService class, we performed dependency injection through the class constructor, receiving the IContactRepository interface. This way we implemented the Dependency Inversion Principle. Alternatively, the service class could create a new instance of the ContactRepository class directly with the “new” operator, but that is a serious mistake: it is not dependency injection at all, it couples the service to a concrete class, and it completely defeats the Dependency Inversion Principle.
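For contrast, this is roughly what the tightly coupled version would look like; the sketch is illustrative only and should not be copied into the project:

```csharp
using Data;

// Anti-pattern sketch: the service builds its own dependency with "new",
// hard-wiring itself to concrete classes.
public class TightlyCoupledContactService
{
    // Any change to ContactRepository's constructor now breaks this class,
    // and the dependency cannot be replaced with a mock in unit tests.
    private readonly ContactRepository _repository =
        new ContactRepository(new ContactDBContext());
}
```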
To make the API functional, we need to generate the database and tables. For this, we will use EF Core commands.
First, make sure you have EF Core installed globally via the command:
dotnet ef
If it is installed, you should see the EF Core help output in the terminal.
If the command returns an error, you need to install it. Use the command below to install the EF Core tool globally:
dotnet tool install --global dotnet-ef
Open a terminal in the application, and run the commands below to create the database scripts and run them through EF Core’s Migrations feature:
dotnet ef migrations add InitialModel
dotnet ef database update
After executing the EF commands, the database is ready to be used.
We also need to add the API endpoints. So, in the Program.cs file, just below where the “app” variable is created, add the code below:
app.MapGet("v1/contacts", async (IContactService service) =>
{
var allContacts = await service.FindAllContactsAsync();
return allContacts.Any() ? Results.Ok(allContacts) : Results.NotFound();
}).Produces<Contact>();
app.MapGet("v1/contacts/{id}", async (IContactService service, Guid id) =>
{
var existingContact = await service.FindContactByIdAsync(id);
return existingContact is not null ? Results.Ok(existingContact) : Results.NotFound();
}).Produces<Contact>();
app.MapPost("v1/contacts", async (IContactService service, Contact contact) =>
{
var createdId = await service.CreateContactAsync(contact);
return Results.Created($"/v1/contacts/{createdId}", createdId);
}).Produces<Contact>();
app.MapPut("v1/contacts", async (IContactService service, Contact contact) =>
{
var existingContact = await service.FindContactByIdAsync(contact.Id);
if (existingContact is null)
return Results.NotFound();
await service.UpdateContactAsync(contact.Id, contact);
return Results.Ok("Contact updated successfully");
});
app.MapDelete("v1/contacts/{id}", async (IContactService service, Guid id) =>
{
var existingContact = await service.FindContactByIdAsync(id);
if (existingContact is null)
return Results.NotFound();
await service.DeleteContactAsync(id);
return Results.NoContent();
});
This way the API is functional and available to perform CRUD operations. As the purpose of the post is to demonstrate the implementation of SOLID, the execution of CRUD operations will not be covered, but feel free to perform them.
SOLID is a well-known set of principles in the world of software development, and learning them is essential for any developer who wants to create scalable applications with low coupling and a modular design.
In this post, we learned what each of the principles means and how to implement them in an ASP.NET Core application.
Something important to note is that SOLID goes far beyond the examples demonstrated in the post, so feel free to explore more examples and consolidate your knowledge in this paradigm that has become a requirement in many places.
Object orientation is vital in developing robust and scalable applications in ASP.NET Core. By adopting object-oriented principles, developers can structure their code to be modular and reusable, promoting greater cohesion and less coupling between components, resulting in a more flexible and maintainable architecture.
This post will discuss the four pillars of object orientation and demonstrate how to implement each in an ASP.NET Core application. That way, you can safely develop an object-oriented application and understand each concept of this paradigm.
Object-oriented programming (OOP) is a paradigm that emerged in the 1960s, but became popular mainly in the ’90s and became the basis for the creation of several programming languages such as C#, Java, C++, Python, Lua and PHP, among others. OOP teaches a special way of writing computer programs, where real-world ideas are abstracted into software through blocks called “objects.”
An object is an abstraction of some real-world event or entity, with attributes that represent its characteristics or properties, and methods that emulate its behavior.
Think of it like LEGO toys. Each LEGO brick is like an object and you can fit these pieces together to build bigger and more complex things. Likewise, in OOP, you create objects that have their own characteristics (attributes) and actions they can do (methods).
For example, if you are writing a program about library books, you could create an object called “Book.” This object would have information about the book (attributes), such as title, author and year of publication, as well as actions related to it (methods), such as loan, return and late fees.
The big idea of OOP is to break a complex program into smaller, more manageable parts. This makes programming more organized and helps with code reuse, like using different LEGO bricks to build multiple creations.
Looking at it from an imaginary angle, we can say that object-oriented programming is like playing with virtual LEGO pieces on the computer, where you create objects with specific characteristics and actions to build more efficient and flexible programs with reusable parts.
These four basic concepts are fundamental to object-oriented programming and help to create more organized, reusable and understandable code, making the development process more efficient:
Abstraction means focusing only on the most important information about an object, ignoring less relevant details. In programming, abstraction allows you to create simple and clear models of objects, hiding complex details and abstractly representing a system.
Encapsulation means that you put your data (attributes) and actions (methods) inside a class and control who can access them. This helps prevent parts of your code from unduly interfering with other parts, making your program more organized and secure.
Inheritance is like passing traits from one thing to another, like parents passing traits on to their children. In programming, you can create a new class based on another existing class, called a parent class or superclass. The new class inherits the attributes and methods of the parent class, being able to add new things or customize the behavior. This helps reuse code and create object hierarchies.
We can understand polymorphism as an object that acts in different ways depending on the context, like a key that can fit in different types of locks. In programming, polymorphism allows different classes to share the same method name, but each class implements that method in a specific way. This allows for treating different objects uniformly, making the code more flexible and adaptable.
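The idea can be sketched with a pair of hypothetical classes (not part of the example project): the same method name, with each class providing its own implementation.

```csharp
public class Media
{
    public virtual string Describe() => "A generic media item";
}

public class PrintedBook : Media
{
    public override string Describe() => "A printed book";
}

public class AudioBook : Media
{
    public override string Describe() => "An audiobook";
}

// Any Media can be treated uniformly:
// Media m = new AudioBook();
// m.Describe() returns "An audiobook" because the override runs.
```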
ASP.NET Core uses the C# programming language, which is an object-oriented language and has all the necessary features to implement the four principles of OOP.
Next, we will create a simple ASP.NET Core application and implement each of these principles.
To create the example application, you need to have installed a recent version of .NET. Version 7 will be used in the post, but .NET 8 is now available.
You also need an Integrated Development Environment (IDE). This post will use Visual Studio Code, which can be used on Windows, macOS or Linux.
The source code of the application can be accessed here: LibraryManager source code.
The example application will be a Web API to manage books in a library. To create the base of the application, use the command below in the IDE terminal:
dotnet new web -o LibraryManager
Since our application is a library, the perfect example of an abstraction is a book—it’s a real-life object, with associated attributes and actions, that we will replicate in code.
We will define a class called “Book” as the main entity of the application. In C#, a class is a data structure that can contain data members and function members like properties, methods and events, among others.
Usually, classes created to represent entities are called “Model” classes, and by convention, they are located inside a folder called “Models” or “Entities.”
So, in the root of the project create a new folder called “Models” and inside it add the class below:
namespace LibraryManager.Models;
public class Book
{
public Guid Id { get; set; }
public string? Title { get; set; }
public string? Author { get; set; }
public string? Gender { get; set; }
public DateTime ReleaseDate { get; set; }
}
Note that in the Book class created above, we are abstracting the Book entity that has the same properties as the real-life book, such as title, author, gender and release date.
Encapsulation in OOP means hiding the internal details of a class and allowing controlled access to its members (attributes and methods) through access modifiers. C# has the following access modifiers:
public: Public members are accessible from anywhere, both inside and outside the class.
private: Private members are accessible only within the class in which they were declared.
protected: Protected members are accessible within the class that declares them and in derived classes (subclasses).
internal: Internal members are accessible within the same assembly (.dll or .exe file).
protected internal: This combination allows access within the same assembly and also in derived classes, even if they are in different assemblies.
To implement encapsulation in the application, let’s create a class to add some methods. Still inside the “Models” folder, create a new class called “Library” and put the code below in it:
using System.Text.Json;
namespace LibraryManager.Models;
public class Library
{
private List<Book> books;
private readonly string libraryFilePath;
public Library(string libraryFilePath)
{
this.libraryFilePath = libraryFilePath;
LoadData();
}
private void LoadData()
{
if (File.Exists(libraryFilePath))
{
string jsonData = File.ReadAllText(libraryFilePath);
books = JsonSerializer.Deserialize<List<Book>>(jsonData);
}
else
books = new List<Book>();
}
private void SaveData()
{
string jsonData = JsonSerializer.Serialize(books);
File.WriteAllText(libraryFilePath, jsonData);
}
public void AddBook(Book book)
{
if (book is EBook ebook)
books.Add(ebook);
else
books.Add(book);
SaveData();
}
public void RemoveBook(Guid bookId)
{
Book book = books.FirstOrDefault(b => b.Id == bookId);
if (book != null)
{
books.Remove(book);
SaveData();
}
}
public IEnumerable<Book> GetBooks()
{
return books;
}
}
Note that in the code above, we declare several access modifiers, on fields (books and libraryFilePath) and on methods (LoadData(), AddBook(), etc.).
The LoadData() and SaveData() methods are declared private because there is no need to access them externally; they are used only by the class that implements them. In contrast, the AddBook(), RemoveBook() and GetBooks() methods are declared public, as they need to be accessed by external members.
In this way, we are implementing the principle of encapsulation, through access modifiers.
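A minimal standalone sketch of the same idea (the BookShelf class is hypothetical, not part of the project): the data is private, and the outside world interacts only through the public surface.

```csharp
// Hypothetical class: the list is private, so callers can read the count
// and add valid titles, but can never mutate the collection directly.
public class BookShelf
{
    private readonly List<string> _titles = new();

    public int Count => _titles.Count;

    public void Add(string title)
    {
        if (!string.IsNullOrWhiteSpace(title))
            _titles.Add(title);
    }
}
```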
The concept of inheritance in OOP refers to the fact that objects can inherit characteristics and behaviors from other objects.
To implement inheritance in the application, let’s create a new model class called “EBook” that will have all the characteristics of the “Book” class plus two exclusive properties of the book’s digital format. In this case, the “EBook” class will “inherit” from “Book” class.
Inside the “Models” folder, create a new class called “EBook” and put the code below in it:
public class EBook : Book
{
public string? Format { get; set; }
public double SizeInMB { get; set; }
}
To implement inheritance in C#, we put a colon in front of the class that will receive the inheritance (EBook). And after the colon, we put the class that will be inherited (Book), as you can see in the code above. In this way, the EBook class has the same properties as the Book class, such as ID, Title, Author, etc.
Polymorphism allows methods to be implemented by different classes and in different ways, which facilitates reuse and code organization.
In this example, we have two main entities, the “Book” class and the “EBook” class. So far, we’ve only created methods for the “Book” class. Through polymorphism, we can reuse the methods of the Book class for the EBook class.
To do this, add the following code in the “Library” class:
public IEnumerable<EBook> GetEBooks()
{
return books.OfType<EBook>();
}
In the example above, the GetEBooks() method returns only the EBooks present in the library, using the OfType<EBook>() method. This is possible thanks to polymorphism, which allows derived objects (such as EBook) to be treated as base objects (such as Book), as long as inheritance is correctly configured.
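A short usage sketch of how this plays out (the titles and values are illustrative): because EBook “is a” Book, both fit in the same list, and OfType<EBook>() filters by runtime type.

```csharp
var items = new List<Book>
{
    new Book  { Title = "Paper Novel" },
    new EBook { Title = "Digital Novel", Format = "EPUB", SizeInMB = 1.2 }
};

// Only the EBook instance survives the filter.
var ebooks = items.OfType<EBook>().ToList();
```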
Now that we understand the concepts of the four pillars of OOP, let’s run the application and see in practice how the code we built is fully functional.
So first install the Swagger NuGet packages via the terminal by running the commands below:
dotnet add package Microsoft.OpenApi
dotnet add package Swashbuckle.AspNetCore
Finally, in the “Program.cs” file, replace the existing code with the code below:
using LibraryManager.Models;
var filePath = "book.json"; // Specify the path to JSON file
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddSingleton<Library>(_ => new Library(filePath));
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.MapGet("/books", (Library library) =>
{
var books = library.GetBooks();
return Results.Ok(books);
});
app.MapGet("/books/{id}", (Guid id, Library library) =>
{
var book = library.GetBooks().FirstOrDefault(b => b.Id == id);
if (book == null)
return Results.NotFound();
return Results.Ok(book);
});
app.MapPost("/books", (Book book, Library library) =>
{
library.AddBook(book);
return Results.Created($"/books/{book.Id}", book);
});
app.MapDelete("/books/{id}", (Guid id, Library library) =>
{
library.RemoveBook(id);
return Results.NoContent();
});
app.Run();
Now, run the command dotnet run in the application terminal. Then, in your browser, access: http://localhost:5202/swagger/index.html.
That way you can execute the CRUD functions and see that they are working perfectly as shown in the GIF below:
You can use this JSON example to perform the POST request:
{
"id": "c1d1a1b1-5678-1234-9abc-567890123456",
"title": "The Great Gatsby",
"author": "F. Scott Fitzgerald",
"gender": "Classic",
"releaseDate": "1925-04-10T00:00:00"
}
It is important to note that in this example scenario, we are using a JSON file to store the data. In this case, the books.OfType<EBook>() method will not work, because the JSON deserializer cannot distinguish a subclass (EBook) from the base class (Book) without extra configuration. For this to work, you need to use an ORM like EF Core or Dapper and implement a repository that accesses the database.
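That said, if you stay with the JSON file, System.Text.Json gained built-in polymorphic serialization in .NET 7. A hedged sketch of how the model could be annotated (the discriminator values are arbitrary choices, not project code):

```csharp
using System.Text.Json.Serialization;

// With these attributes, the serializer writes a "$type" discriminator,
// so an EBook in the JSON file deserializes back as an EBook.
[JsonDerivedType(typeof(Book), typeDiscriminator: "book")]
[JsonDerivedType(typeof(EBook), typeDiscriminator: "ebook")]
public class Book
{
    public Guid Id { get; set; }
    public string? Title { get; set; }
    // remaining properties as in the Models folder
}
```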
As the purpose of the post is to demonstrate the use of OOP, a database scenario will not be covered, but if you wish, you can use this post: Building a CRUD API with Dapper to implement the connection to a real database.
In addition to the four pillars of OOP, ASP.NET Core has some elements essential for object-oriented development.
Interfaces define contracts that classes must implement. They let you define a common set of methods that different classes can implement in different but compatible ways.
Example:
public interface ILibraryService
{
public List<Book> FindAllBooks();
}
In ASP.NET Core, methods and properties are used to define the behavior and characteristics of classes. Methods perform actions, and properties provide access to internal data.
Example:
//Method
public void AddBook(Book book)
{
if (book is EBook ebook)
books.Add(ebook);
else
books.Add(book);
SaveData();
}
//Properties
public Guid Id { get; set; }
public string? Title { get; set; }
public string? Author { get; set; }
Constructors are unique methods used to initialize objects when they are created. In contrast, destructors are used to release resources associated with an object when it is destroyed.
Example:
//Constructor and destructor
public class Library
{
private List<Book> books;
private readonly string libraryFilePath;
// Constructor: runs when an instance is created
public Library(string libraryFilePath)
{
this.libraryFilePath = libraryFilePath;
}
// Destructor (finalizer): runs when the object is collected
~Library()
{
_ = this.libraryFilePath;
}
}
Static classes contain static members that can be accessed without creating an instance of the class. This is useful for providing shared functionality across the entire application.
Example:
public static class Helper
{
public static string ValidateBookName(string bookName)
{
if (string.IsNullOrEmpty(bookName))
return "Error: Book name is mandatory";
else
return string.Empty;
}
}
Namespaces are used to organize and group related classes into a hierarchy. They help to avoid name conflicts and to modularize the code.
Example:
namespace LibraryManager.Models;
ASP.NET Core often employs object-oriented design patterns such as MVC (Model-View-Controller) to separate business logic, presentation and user interaction.
Object orientation is a paradigm that allows developers to create robust and modular applications, facilitating scalability and maintenance.
ASP.NET Core uses the C# programming language, which implements the four paradigms of OOP (abstraction, encapsulation, inheritance and polymorphism) in addition to others such as interfaces, constructors, methods and object-oriented design patterns.
In this blog post, we saw how to implement each of the four OOP paradigms in an ASP.NET Core application and how these paradigms relate to each other.
Whenever creating an application or functionality, consider using OOP resources.
Debugging web applications is essential for identifying and solving problems, either in the code or in some external mechanism. When debugging an application, the developer can follow step-by-step the execution of the code, analyze variables, inspect call stacks, and identify possible errors and unexpected behavior, ensuring the quality and proper functioning of the application.
Visual Studio is the main tool for developing .NET applications and offers a wide range of features for debugging in ASP.NET Core, making the process more efficient and productive.
In this blog post, we’re going to create an application in ASP.NET Core and see what are the main functions available in Visual Studio for debugging and troubleshooting.
Visual Studio is a powerful and widely used integrated development environment (IDE) developed and maintained by Microsoft. It offers a comprehensive set of tools and features to make creating, debugging and managing software projects easier.
In the context of ASP.NET Core, Visual Studio has a range of significant functionality for developing and debugging modern, scalable web applications.
Visual Studio’s built-in debugger is an essential tool for finding and fixing errors in ASP.NET Core apps. Through an intuitive interface, the integrated debugger allows developers to examine variables, follow the execution flow and identify complex problems in the code.
To create the application you need to have Visual Studio and the latest version of .NET. This post uses .NET 7, but .NET 8 is available now!
The debugger functions discussed in the post are only present in Visual Studio for Windows.
The source code of the application used in the example can be accessed here: TaskManager.
To create the application in Visual Studio, follow the steps below:
Now let’s create a record that will represent the application’s entity, which in this case will be Tasks. Create a new folder inside the project called “Models” and inside that create a new class called “TaskItem” and replace the existing code with the code below:
namespace TaskManager.Models;
public record TaskItem(Guid Id, string Name, string Description, DateTime CreationDate, DateTime DueDate);
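A quick aside on why a record fits here (the values below are hypothetical): positional records give concise immutable properties and value-based equality with no extra code.

```csharp
var id = Guid.NewGuid();
var now = DateTime.Now;

var a = new TaskItem(id, "Demo", "Desc", now, now.AddDays(1));
var b = new TaskItem(id, "Demo", "Desc", now, now.AddDays(1));

// Records compare by their positional values, not by reference:
Console.WriteLine(a == b); // True
```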
Now let’s create a service class to return some data. As the focus of the post is on debugging the application, we won’t use a database. Instead, the data will be mocked in the service class. In the root of the project, create a new folder called “Services” and in it create a new class called “TaskService.cs” and put the following code in it:
using TaskManager.Models;
namespace TaskManager.Services;
public class TaskService
{
    public List<TaskItem> FindTasks()
    {
        var taskList = new List<TaskItem>()
        {
            new TaskItem(
                Id: Guid.NewGuid(),
                Name: "Study ASP.NET Core",
                Description: "Study ASP.NET Core for 2 hours a day",
                CreationDate: DateTime.Now,
                DueDate: DateTime.Now + TimeSpan.FromDays(7)
            ),
            new TaskItem(
                Id: Guid.NewGuid(),
                Name: "Study ASP.NET Core",
                Description: "Clean the room at 4 pm",
                CreationDate: DateTime.Now,
                DueDate: DateTime.Now + TimeSpan.FromDays(7)
            ),
            new TaskItem(
                Id: Guid.NewGuid(),
                Name: "Submit Monthly Report",
                Description: "Submit the monthly sales report by the end of the week",
                CreationDate: DateTime.Now,
                DueDate: DateTime.Now + TimeSpan.FromDays(5)
            ),
            new TaskItem(
                Id: Guid.NewGuid(),
                Name: "Prepare Presentation",
                Description: "Prepare a presentation for the upcoming client meeting",
                CreationDate: DateTime.Now,
                DueDate: DateTime.Now + TimeSpan.FromDays(3)
            ),
            new TaskItem(
                Id: Guid.NewGuid(),
                Name: "Buy Groceries",
                Description: "Buy groceries for the week",
                CreationDate: DateTime.Now,
                DueDate: DateTime.Now + TimeSpan.FromDays(2)
            )
        };
        return taskList;
    }

    public TaskItem FindTaskByName(string name)
    {
        try
        {
            var taskList = FindTasks();
            var task = taskList.SingleOrDefault(t => t.Name == name);
            return task;
        }
        catch (Exception ex)
        {
            return null;
        }
    }
}
Note that in the code above we are defining two methods, one to return the complete list of tasks and the other returning a single task based on the given name. The next step is to add the endpoints that will access this data, so replace the Program.cs file with the code below:
using TaskManager.Services;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddScoped<TaskService>();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapGet("/tasks", (TaskService service) =>
{
    var tasks = service.FindTasks();
    return Results.Ok(tasks);
})
.WithName("FindTasks")
.WithOpenApi();

app.MapGet("/tasks/{name}", (TaskService service, string name) =>
{
    var task = service.FindTaskByName(name);
    return task is null ? Results.NotFound() : Results.Ok(task);
})
.WithName("FindTaskByName")
.WithOpenApi();

app.Run();
Now, make sure that in the “Solution Configurations” tab “Debug” is selected. Run the project by clicking on the start icon in Visual Studio:
Then, in your browser, access the address https://localhost:[PORT]/swagger/index.html and execute the second endpoint, passing Study ASP.NET Core in the “name” parameter, as shown in the image below.
Note that the result is an HTTP status 404, which means there was an error fetching the data. To find out what the error was, let’s debug the application.
To find the error, let’s follow the steps of executing the application. First, set a breakpoint where the exception occurs: in the “TaskService.cs” class, click in the left margin of the code editor next to the line where the exception occurs, as shown in the image below:
Breakpoints are a debugging feature. You can set breakpoints where you want Visual Studio to pause your running code—that way you can observe variable values or unexpected behavior.
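Besides clicking in the editor margin, you can also trigger a break from code itself. A minimal sketch (the `Demo` method is hypothetical) using the `System.Diagnostics.Debugger` class, which only pauses when a debugger is actually attached:

```csharp
using System.Diagnostics;

public class BreakpointDemo
{
    public void Demo()
    {
        // When running under the Visual Studio debugger, execution pauses here,
        // exactly as if a breakpoint had been set on this line.
        // Without a debugger attached, the check prevents any interruption.
        if (Debugger.IsAttached)
        {
            Debugger.Break();
        }
    }
}
```

This can be handy for conditions that are hard to reproduce by clicking a breakpoint at the right moment, but it should be removed before shipping.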
Now rerun the Swagger endpoint, passing the Study ASP.NET Core parameter. Then open Visual Studio and note that execution stopped at the breakpoint. If you click on the “ex” variable and expand it, you will be able to see its value, which in this case is the error we are looking for:
Note that the exception says the following: “Sequence contains more than one matching element.” This means we are filtering for a single item with the “SingleOrDefault()” method, but more than one match was found in the task list. This happened because the first two tasks have the same name. To solve the problem, just change the name of one of the two.
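To see why SingleOrDefault() throws here, it helps to compare it with FirstOrDefault(); a small standalone sketch (the list of names is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var names = new List<string> { "Study ASP.NET Core", "Study ASP.NET Core", "Buy Groceries" };

// FirstOrDefault() returns the first match (or null) even when there are duplicates.
var first = names.FirstOrDefault(n => n == "Study ASP.NET Core");
Console.WriteLine(first);

// SingleOrDefault() enforces uniqueness: with two matches it throws
// InvalidOperationException: "Sequence contains more than one matching element."
try
{
    var single = names.SingleOrDefault(n => n == "Study ASP.NET Core");
}
catch (InvalidOperationException ex)
{
    Console.WriteLine(ex.Message);
}
```

If duplicates are legitimate in your data, FirstOrDefault() is usually the right choice; SingleOrDefault() is appropriate when more than one match indicates a bug, as the exception in our example shows.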
Then stop the debugger and change the name of the second task to “Clean the room.” Next, add another breakpoint on the line where the “task” variable is returned, as shown in the image below:
Then run the debugger and the endpoint in Swagger again. Notice that the debugger paused on the breakpoint before the exception. Now if you hover the mouse cursor over the task variable, you will see that the record with the name “Study ASP.NET Core” was successfully found and will be returned on the endpoint.
Visual Studio’s debugger lets you navigate between breakpoints and through lines of code through an intuitive interface. To test these functions, add the following method above the second endpoint code in the Program.cs file:
static bool ValidateTaskName(string name)
{
    var userValid = true;

    if (name.Length < 3)
        userValid = false;

    return userValid;
}
And in the second endpoint add the call to the validation method:
if (!ValidateTaskName(name))
return Results.BadRequest("The name must have at least 3 characters");
Then add a breakpoint on the “ValidateTaskName()” method call inside the second endpoint and start the debugger again. In the Swagger interface, on the second endpoint, add the text st in the name field and click run:
In Visual Studio, click on the icon that represents a down arrow (“Step Into,” F11). This will make the debugger’s cursor enter the “ValidateTaskName()” method:
Then click on the icon with a curved arrow facing to the right (“Step Over,” F10). This will make the debugger’s cursor move to the next line. That’s how we navigate through the code with the debugger:
To finish the execution, click on the start icon labeled “Continue” (F5). This function jumps straight to the next breakpoint. As we don’t have any more breakpoints set, the debugger will execute the rest of the code all at once.
Through the Visual Studio debugger, it is possible to skip code snippets during debugging. Just position the mouse cursor over the debugger cursor and drag it to the desired area. This way it is possible to ignore validation methods, for example, without the need to comment or delete them. The GIF below demonstrates how to do this:
The debugger has functions for inspecting variable values through the Autos and Locals windows. The Autos window shows the variables used in the current line the debugger cursor is on and also in the previous line. The Locals window shows variables defined in the local scope, which is usually the current method or function.
Run the debugger, resubmit the text “Study ASP.NET Core” on the second endpoint, and look in Visual Studio at the Autos and Locals windows.
To access them, in Visual Studio select “Debug>Windows>Autos” from the menu bar for the Autos window and “Debug>Windows>Locals” for the Locals window. The images below show the windows during debugging.
You can track a variable or expression while debugging by adding it to the Watch window.
To verify this function, add this code snippet:
name = name.ToUpper();
to the first line inside the second endpoint, then, with the debugger running, right-click on the variable name and choose Add Watch.
The Watch window will appear by default at the bottom of the code editor, so step through the code to see the value of the name variable change after going through the added code snippet, as shown in the screenshots below:
Another important feature of the Visual Studio debugger is the call stack window, which allows a better understanding of the application’s execution flow, showing the order in which methods and functions are being called.
To check the call stack window, start the debugger and open the call stack window from the Visual Studio menu: Debug>Windows>Call Stack. Step through the code through the endpoints. The method calls plus other details like the line where the code is will be displayed in the call stack window as in the image below.
Visual Studio provides several shortcut keys to help you navigate and control the debugger efficiently. Some commonly used debugger shortcut keys (Visual Studio defaults):
F5: Start Debugging / Continue
Shift+F5: Stop Debugging
Ctrl+Shift+F5: Restart Debugging
F9: Toggle Breakpoint
F10: Step Over
F11: Step Into
Shift+F11: Step Out
These shortcut keys may vary depending on the version of Visual Studio you are using and any customizations you may have made. You can also view and customize shortcut keys by going to Tools > Options > Environment > Keyboard in the Visual Studio menu.
Visual Studio is a formidable tool for developing and debugging ASP.NET Core applications. Through the Visual Studio debugger, it is possible to find bugs quickly by inspecting values of variables and other objects, following the process execution flow with the Call Stack window, in addition to several other resources.
In this post, we saw some of the main functions of the Visual Studio debugger and how to use them, so whenever you need to analyze a problem during development, consider using the advanced features of the debugger and increase your productivity even more.
Dependency injection (DI) is a design pattern widely used in software development, and understanding it is a basic requirement for anyone wanting to work with web development in ASP.NET Core. The purpose of DI is to promote code modularity, reusability and testability. In ASP.NET Core specifically, DI plays a crucial role in building robust and scalable applications.
This post explains the concept of DI simply, describes the relationships between DI, Inversion of Control (IoC), Dependency Inversion Principle (DIP) and Service Locator, and demonstrates how to implement DI, complete with code samples.
DI is a concept in which a class receives its dependencies from external sources — for example, through constructor parameters, properties or setter methods — rather than creating them within the class. This reduces coupling between system components, making the code more testable and modular.
Using dependency injection in ASP.NET Core, you can gain several advantages, including:
The schema below demonstrates how ASP.NET Core handles DI.
Next, let’s create a simple application in ASP.NET Core to demonstrate how to implement DI. Then we’ll check out the same example, but without dependency injection and see what problems this can bring.
To create the example in this post, you need to install the .NET SDK, version 7 or newer.
You also need a terminal to run .NET commands. You can run them directly from an IDE if you prefer; this example uses Visual Studio Code.
You can access the source code of this post’s examples on GitHub.
To create the base application, run the following command in the terminal:
dotnet new web -o ContactRegister
Open the newly created project with your favorite IDE. In the project, create a new folder called Models and inside it create a new file called Contact.cs. Replace the existing code with the code below:
namespace ContactRegister.Models;
public record Contact(Guid Id, string Name, string Email, string PhoneNumber);
Create a new folder called Data and inside it create a new interface called IContactRepository.cs. Put the code below in it:
using ContactRegister.Models;
namespace ContactRegister.Repository;
public interface IContactRepository
{
    public List<Contact> FindContacts();
}
Still inside the Data folder, create a new class called ContactRepository.cs and put the code below in it:
using ContactRegister.Models;
namespace ContactRegister.Repository;
public class ContactRepository : IContactRepository
{
    public List<Contact> FindContacts()
    {
        var contacts = new List<Contact>()
        {
            new Contact(Guid.NewGuid(), "John Smith", "jsmith@examplemail.com", "987654321"),
            new Contact(Guid.NewGuid(), "Amy Davis", "amy@examplemail.com", "987654321")
        };
        return contacts;
    }
}
Note that in the previous code, you created a record to represent the contact entity, then you created an interface and a class that has a method that returns a list of contacts.
Now, create a new folder called Services. Inside it, create a new class called ContactService.cs and put in the code below:
using ContactRegister.Repository;
using ContactRegister.Models;
namespace ContactRegister.Services;
public class ContactService
{
    private readonly IContactRepository _repository;

    public ContactService(IContactRepository repository)
    {
        _repository = repository;
    }

    public List<Contact> FindAllContacts() =>
        _repository.FindContacts();
}
Note that in the code above, in order to use the FindContacts() method of the ContactRepository class, you are injecting the dependency through the declaration private readonly IContactRepository _repository;. You are then receiving an IContactRepository in the constructor of the ContactService class, like so:
public ContactService(IContactRepository repository)
{
    _repository = repository;
}
This way, every time the ContactService class is instantiated, the DI container supplies an IContactRepository implementation for it to use. Whether that implementation is a fresh instance or a shared one depends on the service lifetime it was registered with, as we’ll see shortly.
In ASP.NET Core, Inversion of Control (IoC) is a design pattern where the responsibility for creating and managing objects is transferred to an IoC container, rather than being controlled directly by application code.
IoC promotes decoupling and modularity in application development. Rather than a class directly depending on other classes or instantiating objects directly, it declares its dependencies through interfaces or abstract base classes. The IoC container is responsible for resolving these dependencies and providing the necessary implementations.
In newer versions of ASP.NET Core, you can configure the IoC container through the Program class. You can register your application’s dependencies using the AddTransient, AddScoped and AddSingleton methods, depending on the required lifecycle for each service.
The IoC container manages the creation of these objects and ensures that dependencies are correctly resolved. Using IoC, we’re delegating the responsibility of dealing with the DI to ASP.NET Core native resources rather than doing it manually.
To implement IoC in your app, add the code below in the Program.cs file:
builder.Services.AddSingleton<IContactRepository, ContactRepository>();
Note that in the above code, you’re passing the ContactRepository class and the IContactRepository interface to the AddSingleton extension method. This is one of the ways to implement dependency injection in ASP.NET Core.
In the .NET ecosystem, ASP.NET Core’s native dependency injection framework supports three main service lifetimes:
AddSingleton: This method registers a dependency as a singleton. This means that a single instance of the service will be created and used by the entire application. Example: builder.Services.AddSingleton<IContactRepository, ContactRepository>();
AddScoped: This method registers a scoped dependency. It ensures that a single instance of the service is created and used for the lifetime of a request. This means that each request receives a different instance of the dependency. Example: builder.Services.AddScoped<IContactRepository, ContactRepository>();
AddTransient: This method registers a dependency as transient. This means that a new instance of the service is created each time it’s requested. Example: builder.Services.AddTransient<IContactRepository, ContactRepository>();
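The practical difference between these lifetimes can be verified with a small console sketch (the marker classes here are hypothetical) using the Microsoft.Extensions.DependencyInjection package directly:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical empty classes used only to compare resolved instances.
class SingletonService { }
class TransientService { }

class Program
{
    static void Main()
    {
        var services = new ServiceCollection();
        services.AddSingleton<SingletonService>();
        services.AddTransient<TransientService>();

        using var provider = services.BuildServiceProvider();

        // Singleton: both resolutions return the same object.
        var s1 = provider.GetRequiredService<SingletonService>();
        var s2 = provider.GetRequiredService<SingletonService>();
        Console.WriteLine(ReferenceEquals(s1, s2)); // True

        // Transient: each resolution creates a new object.
        var t1 = provider.GetRequiredService<TransientService>();
        var t2 = provider.GetRequiredService<TransientService>();
        Console.WriteLine(ReferenceEquals(t1, t2)); // False
    }
}
```

AddScoped sits between the two: within one scope (one HTTP request in ASP.NET Core) it behaves like a singleton, and across scopes it behaves like a transient.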
To make the API functional, you just need to create an endpoint to access the data. Still in the Program.cs file, add the code below:
app.MapGet("/contacts", (IContactRepository repository) =>
{
    var contacts = repository.FindContacts();
    return Results.Ok(contacts);
});
If you run the command dotnet run in the terminal and access the address http://localhost:PORT/contacts in your browser, you’ll get the following result:
Note that the dependency injection worked, and you’re able to access the data.
Now, let’s see what it would be like if you did the same thing but without using dependency injection. In this case, the ContactService class would look like this:
public class ContactService
{
    private readonly IContactRepository _repository;

    public ContactService()
    {
        _repository = new ContactRepository(); // Manual dependency creation
    }

    // ...
}
Note that this way, instead of receiving the ContactRepository instance in the constructor of the service class, a new instance of the ContactRepository class is created manually through the new operator.
This practice is wrong and should not be used, as it can cause several problems. For one, the ContactService class becomes tightly coupled to the concrete repository implementation. This makes it difficult to replace the implementation with another one without directly modifying the class code, which limits the flexibility and extensibility of the system.

The Dependency Inversion Principle (DIP) refers to one of the principles of SOLID, a set of software design guidelines that promotes code modularity, flexibility and maintainability. DIP states that high-level classes should not directly depend on low-level classes. Instead, they must rely on abstractions.
In the context of ASP.NET Core, DIP implementation is achieved through the use of interfaces or abstract classes to define contracts and abstractions. Instead of high-level classes directly depending on low-level classes, they rely on interfaces or abstract classes that represent these dependencies.
ASP.NET Core uses dependency injection (DI) to implement DIP. As discussed earlier, it’s through DI that dependencies are injected into classes at runtime, rather than being created or instantiated directly in code. This promotes loose coupling between classes and makes replacing implementations easier; you can easily provide different implementations of a dependency without modifying the code that uses it.
In short, DIP in ASP.NET Core is achieved by applying the principle of inverting dependencies through the use of DI.
When you talk about DI in languages like C# and Java, you’re going to run across the term Service Locator a lot.
Service Locator is an old design pattern that allows you to get instances of services through a centralized locator. Although it was used in some older applications and frameworks, it has some disadvantages compared to DI:
In ASP.NET Core, Service Locator is neither an officially supported design pattern nor recommended by the framework. The recommended approach to dependency resolution in ASP.NET Core is dependency injection.
As we saw in this post, ASP.NET Core has a robust native dependency injection engine that offers advanced features such as lifecycle control, service configuration and support for service abstraction.
Microsoft documentation recommends avoiding the use of the Service Locator pattern.
In short, Service Locator can be useful in specific scenarios where it’s not possible to use DI, such as with legacy code or dynamic configuration of services, but it’s always preferable to use dependency injection.
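For illustration, the difference between the two styles looks roughly like this — a hedged sketch reusing the IContactRepository interface from earlier (the class names are hypothetical):

```csharp
using Microsoft.Extensions.DependencyInjection;

public interface IContactRepository { }

// Service Locator style (discouraged): the class reaches into a locator to
// fetch its dependency, hiding that dependency from callers and from tests.
public class ContactServiceWithLocator
{
    private readonly IContactRepository _repository;

    public ContactServiceWithLocator(IServiceProvider provider)
    {
        _repository = provider.GetRequiredService<IContactRepository>();
    }
}

// Dependency Injection style (recommended): the dependency is explicit in the
// constructor signature, so it is visible, compiler-checked and easy to mock.
public class ContactServiceWithDi
{
    private readonly IContactRepository _repository;

    public ContactServiceWithDi(IContactRepository repository)
    {
        _repository = repository;
    }
}
```

Both classes end up holding an IContactRepository, but only the DI version declares that fact in its public contract, which is why it tests and refactors more easily.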
At first glance, dependency injection may seem like a complex and difficult subject, but as shown throughout the post, it’s possible to implement DI in a simple way, using only native features of ASP.NET Core.
Every developer working with object-oriented languages such as C# should understand how DI works. Its implementation is common in day-to-day work, especially when creating any application using ASP.NET Core.
The database is the most common form of storage in web applications, allowing applications to store and retrieve information reliably and at scale—you could consider it an essential and useful component in any project. Let’s check out how to access and use a database in an ASP.NET Core app using EF Core and Dapper.
One of the main features offered by ASP.NET Core, an open-source web development framework, is its ability to seamlessly integrate with different relational database management systems such as SQL Server, MySQL and PostgreSQL. It also integrates with non-relational databases such as MongoDB and Firebase. This provides companies with a wide variety of options to choose a database best suited to their needs.
ASP.NET Core has an excellent resource for working with databases, one that already provides several abstracted implementations so developers don’t need to reinvent the wheel every time they create a new application. I’m talking, of course, about the well-known object-relational mapper (ORM) maintained by Microsoft: EF Core.
The Entity Framework Core is a widely used ORM that facilitates interaction between the application and the database. It simplifies everyday tasks, such as creating tables and queries, providing a set of tools for code-based data manipulation. This speeds up development and reduces the code needed to work with databases.
In this article, you’ll learn how to implement a web application in ASP.NET Core from scratch and integrate it with a database. You’ll also learn best practices using EF Core and the micro-ORM Dapper.
This article assumes that you already have a basic knowledge of working with databases, such as understanding SQL and basic commands such as SELECT, INSERT, UPDATE and DELETE for manipulating and querying data.
We’ll be using version 7 of .NET SDK, the recommended version at the time this article was written.
This article uses Visual Studio, but feel free to use another IDE of your choice.
Let’s create a simple application using the native ASP.NET Core template for minimal APIs. You’ll add the EF Core dependencies and create the database through the Code First approach. Finally, you’ll create the necessary methods for manipulating the data.
You can access the full project source code here: PetManager Source Code.
In Visual Studio, follow these steps to get started quickly:
Next, create the main entity of your application through which EF Core will design the database. When you run EF Core commands, the table is generated from the class name and the table columns will mirror the class properties.
In the root of the project, create a new folder called Models. Inside the folder, create a new class called Pet. Open the newly created class and replace the existing code with the code below:
namespace PetManager.Models;
public class Pet
{
    public Guid Id { get; set; }
    public string? Name { get; set; }
    public string? Species { get; set; }
    public string? Breed { get; set; }
    public int Age { get; set; }
    public string? Color { get; set; }
    public double Weight { get; set; }
    public bool Vaccinated { get; set; }
    public string? LastVaccinationDate { get; set; }
    public Owner? Owner { get; set; }
    public Guid OwnerId { get; set; }
}

public class Owner
{
    public Guid Id { get; set; }
    public string? Name { get; set; }
    public string? Email { get; set; }
    public string? Phone { get; set; }
}
Now let’s create the class responsible for communicating with EF Core. Through this, EF Core will create the connection to the database and also convert the entity from the database to the class you created earlier.
In order to use EF Core features, download the package for the project in Visual Studio:
To properly run EF Core commands, download the following NuGet packages:
In the root of the project, create a new folder called Data. Inside the folder, create a new class called PetDbContext and replace the generated code with the code below:
using Microsoft.EntityFrameworkCore;
using PetManager.Models;
namespace PetManager.Data;
public class PetDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseSqlite("DataSource=petManagerDb; Cache=Shared");

    public DbSet<Pet> Pets { get; set; }
    public DbSet<Owner> Owner { get; set; }
}
Note that the PetDbContext class inherits the functions of the DbContext class. Through the OnConfiguring() method, you’re passing the database settings, defining the database name (petManagerDb) and specifying that the database cache will be shared. The Pet class is mapped to the database entity through the implementation of DbSet<Pet> Pets { get; set; } and the Owner class through DbSet<Owner> Owner { get; set; }.
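As an aside, instead of hard-coding the connection in OnConfiguring(), you could register the context with the DI container and pass the options there — a minimal alternative sketch using EF Core’s AddDbContext extension (the connection string mirrors the one above):

```csharp
using Microsoft.EntityFrameworkCore;
using PetManager.Data;

var builder = WebApplication.CreateBuilder(args);

// Register the context through DI instead of overriding OnConfiguring().
// AddDbContext registers the context with a scoped lifetime by default,
// so each HTTP request gets its own PetDbContext instance.
builder.Services.AddDbContext<PetDbContext>(options =>
    options.UseSqlite("DataSource=petManagerDb; Cache=Shared"));

var app = builder.Build();

app.Run();
```

This keeps connection details out of the context class and makes it easier to swap providers or read the connection string from configuration.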
EF Core can create the database based on the project entity by just running a few commands. However, in order to do that, you need to have EF installed globally on your machine:
dotnet tool install --global dotnet-ef
To run EF commands in Visual Studio, follow these steps:
Run dotnet ef migrations add InitialModel to create the database scripts.
Run dotnet ef database update to create the database in the root of the project.
After running the commands, you should have a result like the image below:
Since the database was created at the root of the project, your next step is to create the API routes to populate the database.
Replace the code in the Program.cs file with the code below:
using Microsoft.EntityFrameworkCore;
using PetManager.Data;
using PetManager.Models;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddTransient<PetDbContext>();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.MapGet("/pets", (PetDbContext db) =>
{
    var pets = db.Pets;
    var owners = db.Owner;

    foreach (var petItem in pets)
    {
        var owner = owners.SingleOrDefault(o => o.Id == petItem.OwnerId);
        petItem.Owner = owner;
    }

    return Results.Ok(pets);
});

app.MapGet("/pets/{id}", (Guid id, PetDbContext db) =>
{
    var pets = db.Pets;
    var pet = pets.SingleOrDefault(p => p.Id == id);

    if (pet == null)
        return Results.NotFound();

    var owners = db.Owner;
    var owner = owners.SingleOrDefault(o => o.Id == pet.OwnerId);
    pet.Owner = owner;

    return Results.Ok(pet);
});

app.MapPost("/pets", (PetDbContext db, Pet pet) =>
{
    db.Pets.Add(pet);
    db.SaveChanges();
    return Results.Created($"/pets/{pet.Id}", pet);
});

app.MapPut("/pets/{id}", (Guid id, Pet pet, PetDbContext db) =>
{
    db.Entry(pet).State = EntityState.Modified;
    db.Entry(pet.Owner).State = EntityState.Modified;
    db.SaveChanges();
    return Results.Ok(pet);
});

app.MapDelete("/pets/{id}", (Guid id, PetDbContext db) =>
{
    var pets = db.Pets;
    var petEntity = db.Pets.SingleOrDefault(p => p.Id == id);

    if (petEntity == null)
        return Results.NotFound();

    var owners = db.Owner;
    var owner = owners.SingleOrDefault(o => o.Id == petEntity.OwnerId);
    owners.Remove(owner);
    pets.Remove(petEntity);
    db.SaveChanges();

    return Results.NoContent();
});

app.UseHttpsRedirection();

app.Run();
The code above registers the PetDbContext class for dependency injection through the code builder.Services.AddTransient&lt;PetDbContext&gt;();. You’re also implementing the API endpoints that execute the CRUD functions.
The endpoints use the injected PetDbContext instance to query and persist the data.
Your API is ready to run, so in Visual Studio click the Play icon. A window will open in your browser and you can now test the application through the Swagger interface.
Expand the POST tab and click Try out. In the tab that opens, paste the JSON below and click Execute.
{
  "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
  "name": "Max",
  "species": "Dog",
  "breed": "Golden Retriever",
  "age": 3,
  "color": "Golden",
  "weight": 30,
  "vaccinated": true,
  "lastVaccinationDate": "2023-05-01",
  "owner": {
    "id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
    "name": "John Doe",
    "email": "johndoe@example.com",
    "phone": "123-456-7890"
  }
}
With this procedure, you’ve just created a record in the database. Now, go into the GET tab and click Try out > Execute to fetch the record you created.
The GIF below demonstrates the procedure for creating and searching a record.
This article uses SQLite, a relational database well-known for its simplicity, where the file containing the database and the tables are stored in the application itself.
To view the database data, this tutorial uses the SQLite View Editor for Windows, but you can use any other software you prefer.
To create a connection with the database, open the SQLite Viewer, select the type SQLite and click Choose file. Select the petManagerDb file in the root of the project and open the connection. You can see the process in the image below:
With the connection open, you can search the database, as shown in the image below:
Note that the Pets table has a foreign key relationship with the Owner table through the OwnerId column. When you use a class as a property in another class, EF Core identifies that a relationship exists between them. When you run the Migrations commands, the SQL scripts created are already prepared for the creation of foreign keys.
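Incidentally, because Owner is a navigation property on Pet, EF Core can load the related owner in a single query with Include(), instead of looking owners up one by one as the /pets endpoint does. A hedged sketch of an alternative endpoint (the route name is illustrative):

```csharp
using Microsoft.EntityFrameworkCore;

// Alternative version of the /pets endpoint: Include() tells EF Core to
// join the Owner table and populate the navigation property automatically.
app.MapGet("/pets/withowners", (PetDbContext db) =>
{
    var pets = db.Pets
        .Include(p => p.Owner)
        .ToList();

    return Results.Ok(pets);
});
```

Loading related data eagerly like this avoids the per-pet owner lookups (the classic N+1 query problem) in the manual version.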
Dapper is a .NET data access library. It was developed to simplify and streamline the process of accessing relational databases, allowing developers to write SQL queries directly and efficiently.
Unlike other more complex ORM solutions, such as the Entity Framework, Dapper is a micro-ORM. A minimalist library, Dapper focuses only on the task of mapping SQL query results into .NET objects. It delivers exceptionally fast performance due to its lightweight and straightforward approach.
To use the MySQL database, you need to have previously configured a MySQL server locally. This article does not cover how to configure MySQL, but tutorials are readily available online.
Dapper doesn’t have native resources for generating databases and tables like EF Core, so you need to create them manually. You can use the SQL commands below to create the database and tables in MySQL:
-- Create the database
CREATE DATABASE `petmanagerdb` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
USE `petmanagerdb` ;
-- Create owner table (created first, since pet references it via a foreign key)
CREATE TABLE `owner` (
  `Id` char(36) NOT NULL,
  `Name` varchar(255) DEFAULT NULL,
  `Email` varchar(255) DEFAULT NULL,
  `Phone` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`Id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

-- Create pet table
CREATE TABLE `pet` (
  `Id` char(36) NOT NULL,
  `Name` varchar(255) DEFAULT NULL,
  `Species` varchar(255) DEFAULT NULL,
  `Breed` varchar(255) DEFAULT NULL,
  `Age` int DEFAULT NULL,
  `Color` varchar(255) DEFAULT NULL,
  `Weight` double DEFAULT NULL,
  `Vaccinated` tinyint(1) DEFAULT NULL,
  `LastVaccinationDate` date DEFAULT NULL,
  `OwnerId` char(36) NOT NULL,
  PRIMARY KEY (`Id`),
  KEY `OwnerId` (`OwnerId`),
  CONSTRAINT `pet_ibfk_1` FOREIGN KEY (`OwnerId`) REFERENCES `owner` (`Id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
To work with Dapper, you need to download two NuGet packages:
You can also download them through Manage NuGet Packages in Visual Studio.
You need to create a new entity to represent the tables in MySQL. Inside the Models folder, create a new class called PetDto and replace the generated code with the code below:
namespace PetManager.Models;
public class PetDto
{
    public string? Id { get; set; }
    public string? Name { get; set; }
    public string? Species { get; set; }
    public string? Breed { get; set; }
    public int Age { get; set; }
    public string? Color { get; set; }
    public double Weight { get; set; }
    public bool Vaccinated { get; set; }
    public string? LastVaccinationDate { get; set; }
    public string? OwnerId { get; set; }
    public OwnerDto? Owner { get; set; }
}

public class OwnerDto
{
    public string? Id { get; set; }
    public string? Name { get; set; }
    public string? Email { get; set; }
    public string? Phone { get; set; }
}
Still inside the Models folder, add a new class called ConnectionString and replace the generated code with the code below:
namespace PetManager.Models;
public class ConnectionString
{
    public string? ProjectConnection { get; set; }
}
This class will be used to store the MySQL connection string.
Now let’s create the repository class that will run the queries in MySQL. Inside the Data folder, create a new class called PetRepository and replace the generated code with the code below:
using Dapper;
using Microsoft.Extensions.Options;
using MySql.Data.MySqlClient;
using PetManager.Models;
using System.Data;
namespace PetManager.Data;
public class PetRepository
{
    private readonly string _connectionString;

    public PetRepository(IOptions<ConnectionString> connectionString)
    {
        _connectionString = connectionString.Value.ProjectConnection!;
    }

    public async Task<List<PetDto>> GetAllPets()
    {
        // Create a short-lived connection per call; reusing a single connection
        // field would fail after the first "using" block disposes it.
        using IDbConnection dbConnection = new MySqlConnection(_connectionString);
        string query = "select * from pet";
        var pets = await dbConnection.QueryAsync<PetDto>(query);
        return pets.ToList();
    }

    public async Task<OwnerDto?> GetOwner(string ownerId)
    {
        using IDbConnection dbConnection = new MySqlConnection(_connectionString);
        string query = "select * from owner where id = @OwnerId";
        var owner = await dbConnection.QueryAsync<OwnerDto>(query, new { OwnerId = ownerId });
        return owner.SingleOrDefault();
    }
}
Note that in the code above, you’re defining methods to search for entities in the database; you’ll pass a query string, which contains the SQL code necessary to return the data.
Now let’s create the MySQL connection string. Open the appsettings.json
file in the root of the project and replace the existing code with the code below:
{
"ConnectionStrings": {
"ProjectConnection": "host=localhost; port=YOUR_MYSQL_PORT; database=petmanagerdb; user=YOUR_MYSQL_USER; password=YOUR_MYSQL_PASSWORD;"
},
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
}
},
"AllowedHosts": "*"
}
Replace YOUR_MYSQL_PORT, YOUR_MYSQL_USER and YOUR_MYSQL_PASSWORD with your local MySQL settings.
The last step is to add the configurations of the previously created classes. In the Program.cs
file, add the lines of code below:
builder.Services.AddSingleton<PetRepository>();
builder.Services.Configure<ConnectionString>(builder.Configuration.GetSection("ConnectionStrings"));
You also need to add the endpoint that will use the MySQL database. In the Program.cs
file, add the code below:
app.MapGet("/pets/viadapper", async (PetRepository db) =>
{
var pets = await db.GetAllPets();
foreach (var pet in pets)
pet.Owner = await db.GetOwner(pet.OwnerId);
return Results.Ok(pets);
});
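Note that the loop above issues one extra owner query per pet (an "N+1" pattern), which is fine for a small demo. As an alternative sketch, Dapper's multi-mapping overload of QueryAsync can load pets and their owners in a single JOIN. The method and class name below are hypothetical, and the sketch assumes the PetDto/OwnerDto shapes defined earlier:

```csharp
using System.Data;
using Dapper;
using MySql.Data.MySqlClient;
using PetManager.Models;

public static class PetQueries
{
    public static async Task<List<PetDto>> GetAllPetsWithOwners(string connectionString)
    {
        using IDbConnection conn = new MySqlConnection(connectionString);
        string query = @"select p.*, o.*
                         from pet p
                         inner join owner o on o.Id = p.OwnerId";
        // Dapper splits each row into a PetDto and an OwnerDto at the owner's Id column
        var pets = await conn.QueryAsync<PetDto, OwnerDto, PetDto>(
            query,
            (pet, owner) => { pet.Owner = owner; return pet; },
            splitOn: "Id");
        return pets.ToList();
    }
}
```

This trades one round trip for a slightly larger result set, which is usually a good exchange when every pet's owner is needed anyway.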
To test it, run the application and use Swagger to run the /pets/viadapper
endpoint as shown below:
The database is an ever-present theme in web application development, and ASP.NET Core has all the necessary features to make it easier for developers to work with them. There are many options for working with databases in ASP.NET Core; this article addressed two of the main ones.
You learned how to create a web application in ASP.NET Core and connect it to a relational database using the Code First approach with the EF Core ORM solution. You also learned how to use a MySQL database with the micro-ORM Dapper.
NuGet packages play a vital role in ASP.NET Core development, providing an easy way to manage dependencies, improve efficiency, ensure security and speed up the development process—making them indispensable for building modern and robust web applications, and saving time and effort.
In the first part of Essential NuGet Packages for Beginners, we explored five packages that help simplify the development of a web application by automatically generating databases and tables, registration of information, validations and documentation.
In this second part, we will check out five more packages that will help you to evolve an application, making it easier to maintain and guarantee its quality.
By the end of this post, you will be able to develop robust, good-quality web applications in ASP.NET Core using five NuGet packages: Dapper, RestSharp, Newtonsoft.Json, XUnit and Humanizer.
For the example in this post, we will create a minimal API. You can access the application source code here.
To create the API base template, run the command below in the terminal:
dotnet new web -o CustomerManagement
Open the application with your favorite IDE—this post uses Visual Studio Code.
Now, let’s create the classes that will represent the application’s entity. Inside the project, create a new folder called “Models” and, inside that, create the classes and the record below:
namespace CustomerManagement.Models;
public class Customer
{
public Guid Id { get; set; }
public string Name { get; set; }
public string Email { get; set; }
public Address Address { get; set; }
public Customer(Guid id, string name, string email, Address address)
{
Id = id;
Name = name;
Email = email;
Address = address;
}
}
namespace CustomerManagement.Models;
public class Address
{
public Guid Id { get; set; }
public string Street { get; set; }
public string City { get; set; }
public string State { get; set; }
public string PostalCode { get; set; }
public string Country { get; set; }
public Guid CustomerId { get; set; }
public Address(Guid id, string street, string city, string state, string postalCode, string country, Guid customerId)
{
Id = id;
Street = street;
City = city;
State = state;
PostalCode = postalCode;
Country = country;
CustomerId = customerId;
}
}
namespace CustomerManagement.Models;
public record CustomersByCountryDto(Guid Id, string Name, string Email);
Now let’s install the first NuGet package, an excellent ORM for working with data, Dapper.
Dapper is an open-source object-relational mapping (ORM) library developed for .NET that aims to facilitate access and manipulation of data in relational databases through a lightweight and high-performance alternative to traditional ORM frameworks.
Dapper is widely used as it allows developers to run SQL queries directly in their applications, mapping the results to C# objects efficiently and quickly. Using reflection and metaprogramming capabilities, Dapper minimizes overhead and improves efficiency, making it a popular choice for projects that require optimized performance in .NET applications.
So, use the command below to install Dapper in the project:
dotnet add package Dapper
An important point: this example uses MySQL as the database, so you need a MySQL server running in your local environment. This post does not cover configuring MySQL locally, but there are plenty of tutorials online. The project's GitHub repository also includes a Docker configuration file if you prefer to run MySQL in a Docker container.
Another required NuGet package is MySql.Data, used to perform SQL operations on the MySQL database. Use the command below to download the MySql.Data dependency into the project:
dotnet add package MySql.Data
To use Dapper, let’s create a repository class to connect to the database. Then create a new folder called “Data” and inside it create the interface and class below:
using CustomerManagement.Models;
public interface ICustomerRepository
{
Task<List<CustomersByCountryDto>> FindByCountry(string country);
}
using System.Data;
using CustomerManagement.Models;
using Dapper;
using Microsoft.Extensions.Options;
using MySql.Data.MySqlClient;
namespace CustomerManagement.Data;
public class CustomerRepository : ICustomerRepository
{
    private readonly string _connectionString;
    public CustomerRepository(IOptions<ConnectionString> connectionString)
    {
        _connectionString = connectionString.Value.ProjectConnection!;
    }
    public async Task<List<CustomersByCountryDto>> FindByCountry(string country)
    {
        // Create (and dispose) a connection per call; Dapper opens it on demand
        using IDbConnection db = new MySqlConnection(_connectionString);
        string query = @"select
                             c.id,
                             c.name,
                             c.email
                         from customers c
                         inner join addresses a
                             on c.id = a.customerId
                         where a.country = @Country";
        var customersByCountry = await db.QueryAsync<CustomersByCountryDto>(query, new { Country = country });
        return customersByCountry.ToList();
    }
}
Note that in the code above we are passing the database connection string to the class constructor and creating a method to return data from the database customers.
Note also that we declare the SQL code in a string and pass it to Dapper’s QueryAsync()
method to execute.
The database and tables don’t exist yet, so you can use the SQL script below to create them and insert some sample data. Just run it on your local MySQL server.
-- Create customer_db database
CREATE DATABASE IF NOT EXISTS customer_db;
-- Using customer_db database
USE customer_db;
-- Create customers table
CREATE TABLE IF NOT EXISTS customers (
id CHAR(36) PRIMARY KEY,
name VARCHAR(100) NOT NULL,
email VARCHAR(100) NOT NULL
);
-- Create addresses table
CREATE TABLE IF NOT EXISTS addresses (
id CHAR(36) PRIMARY KEY,
street VARCHAR(100) NOT NULL,
city VARCHAR(50) NOT NULL,
state VARCHAR(50) NOT NULL,
postalCode VARCHAR(20) NOT NULL,
country VARCHAR(50) NOT NULL,
customerId CHAR(36) NOT NULL,
CONSTRAINT fk_addresses_customers FOREIGN KEY (customerId)
REFERENCES customers(Id)
ON DELETE CASCADE
);
-- Using the customer_db database
USE customer_db;
-- Insert in customers
INSERT INTO customers (id, name, email)
VALUES
('ba7991a3-22c3-473b-8421-676b714c2181', 'John Doe', 'john.doe@example.com'),
('8bc247b0-bfa8-44d9-b1ff-0533655d74c9', 'Jane Smith', 'jane.smith@example.com'),
('81f4a9ef-2cc3-422d-89fd-6ba7c4a5fda3', 'David D. Clifford', 'david.cli@example.com');
-- Insert in addresses
INSERT INTO addresses (id, street, city, state, postalCode, country, customerId)
VALUES
('b41a07a6-6cfc-4195-8579-c6bf0b605ea1', '2592 Boundary Street', 'Jacksonville', 'Florida', '32202', 'USA', 'ba7991a3-22c3-473b-8421-676b714c2181'),
('6b155fb2-3be8-4fdc-8c00-947670d28644', '456 Elm Avenue', 'Hythe', 'Alberta', '67890', 'Canada', '8bc247b0-bfa8-44d9-b1ff-0533655d74c9'),
('a432b04c-c380-46f9-bbc2-7b68240c557d', '1224 James Martin Circle', 'Columbus', 'Ohio', '43212', 'USA', '81f4a9ef-2cc3-422d-89fd-6ba7c4a5fda3');
The next step is to add the connection string and repository class configuration. In the “appsettings.json” file found at the root of the project, add the code below. Remember to change it with your MySQL credentials.
"ConnectionStrings": {
"ProjectConnection": "host=localhost; port=3306; database=customer_db; user=YOUR_MYSQL_USER; password=YOUR_MYSQL_PASSWORD;"
},
Now in the “Program.cs” file, add the following lines of code just below where the “builder” variable is created:
builder.Services.AddTransient<ICustomerRepository, CustomerRepository>();
builder.Services.Configure<ConnectionString>(builder.Configuration.GetSection("ConnectionStrings"));
The last step to test the API is to create an endpoint to access the repository and return the data. Still in the “Program.cs” file, add the code below:
app.MapGet("/v1/customers/by_country/{country}", async ([FromServices] ICustomerRepository repository, string country) =>
{
var customers = await repository.FindByCountry(country);
return customers.Any() ? Results.Ok(customers) : Results.NotFound("No records found");
})
.WithName("FindCustomersByCountry");
To test the application, just run the command below in the terminal:
dotnet run --urls=http://localhost:5000
If you access the address http://localhost:5000/v1/customers/by_country/USA in your browser, you will have the following result:
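Given the seed data inserted earlier, the USA query should return the two customers whose addresses have country 'USA', serialized (with ASP.NET Core's default camelCase naming) roughly as:

```json
[
  { "id": "ba7991a3-22c3-473b-8421-676b714c2181", "name": "John Doe", "email": "john.doe@example.com" },
  { "id": "81f4a9ef-2cc3-422d-89fd-6ba7c4a5fda3", "name": "David D. Clifford", "email": "david.cli@example.com" }
]
```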
Our API is functional and returning data correctly, but imagine that you need to call the endpoint /v1/customers/by_country
through another API. How would that be done? There are several ways, and one of the simplest and most straightforward is through a NuGet package called RestSharp.
RestSharp is a NuGet package that makes communicating with RESTful APIs simpler and more efficient. Through its advanced set of features, RestSharp allows developers to easily send HTTP requests such as GET, POST and DELETE and also supports manipulation of data in standard formats such as JSON and XML.
In addition, the library facilitates object serialization and deserialization, making application integration with web services more fluid by automatically converting data between object formats and data structures.
RestSharp allows developers to focus on application logic while the library handles the complex aspects of inter-API communication.
To implement RestSharp, we are going to create a new API. So use the command below in the terminal:
dotnet new web -o CustomerProcess
Then open the project with your IDE and run the following command in the terminal to download the RestSharp dependency in the project:
dotnet add package RestSharp
Now let’s implement the communication with the API we created earlier. For that, in the “CustomerProcess” project, replace the existing code in the “Program.cs” file with the following code:
using RestSharp;
using Microsoft.AspNetCore.Mvc;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddTransient<IRestClient, RestClient>();
var app = builder.Build();
app.MapGet("/v1/get_customers_by_country/{country}", async ([FromServices] IRestClient restClient, string country) =>
{
var request = new RestRequest($"http://localhost:5000/v1/customers/by_country/{country}", Method.Get);
var response = await restClient.ExecuteAsync(request);
if (response.IsSuccessful)
return Results.Ok(response.Content);
else
return Results.BadRequest($"Error: {response.StatusCode}");
});
app.Run();
In the code above we are creating the RestSharp configuration through the AddTransient<IRestClient, RestClient>()
method. Then we define an endpoint to access the API, which creates a new object of type RestRequest
passing the route of the API created earlier.
In real-world scenarios, external API routes are typically kept in configuration (for example, in ".env" files or app settings) rather than hard-coded, but to keep things simple, in this example we declare the route directly in the object.
Finally, we use RestSharp’s ExecuteAsync()
method to execute the request and return the result.
To test, make sure that the API of the “CustomerManagement” project is running on port 5000, and then run the following command in the terminal of the “CustomerProcess” project:
dotnet run --urls=http://localhost:5054
Then, in your browser, go to the following address:
http://localhost:5054/v1/get_customers_by_country/USA
And you should have the following result:
Note that we can access the API that returns customer data easily by passing the route to be accessed to RestSharp. In the post scenario, we have only one API, but imagine if we had dozens. It would be very simple to implement multiple requests.
In this post, we are just looking at a simple implementation of RestSharp, but there are many other resources available. If you wish, you can explore them on the official RestSharp website.
Despite being simple, Newtonsoft.Json is very useful, as it facilitates the work of web developers who need to deal with request and response data on a daily basis.
Furthermore, the ability to customize the serialization and deserialization process, along with advanced features such as LINQ support, make the package a versatile and powerful tool for manipulating JSON data in a variety of development scenarios.
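As a small, self-contained sketch of those capabilities (the Person record below is invented here purely for illustration):

```csharp
using System;
using System.Linq;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public record Person(string Name, string Email);

public static class JsonDemo
{
    public static void Main()
    {
        // Serialize a typed object to a JSON string
        var person = new Person("John Doe", "john.doe@example.com");
        string json = JsonConvert.SerializeObject(person);

        // Deserialize the string back into a typed object
        var restored = JsonConvert.DeserializeObject<Person>(json);
        Console.WriteLine(restored!.Name); // John Doe

        // LINQ to JSON: query a document without declaring a type for it
        var doc = JObject.Parse(@"{ ""customers"": [ { ""name"": ""Jane"" }, { ""name"": ""David"" } ] }");
        var names = doc["customers"]!.Select(c => (string?)c["name"]).ToList();
        Console.WriteLine(string.Join(", ", names)); // Jane, David
    }
}
```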
To download Newtonsoft.Json into the “CustomerProcess” project, use the following command:
dotnet add package Newtonsoft.Json
Now, still in the “CustomerProcess” project, create a new folder called “Models” and, inside it, create the record below:
namespace CustomerProcess.Models;
public record Customer(Guid Id, string Name, string Email);
Then in the “Program.cs” file, replace the existing code with the following code:
using RestSharp;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
using CustomerProcess.Models;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddTransient<IRestClient, RestClient>();
var app = builder.Build();
app.MapGet("/v1/get_customers_by_country/{country}", async ([FromServices] IRestClient restClient, string country) =>
{
var request = new RestRequest($"http://localhost:5000/v1/customers/by_country/{country}", Method.Get);
var response = await restClient.ExecuteAsync(request);
if (response.IsSuccessful)
{
    // Deserialize only after confirming success; Content may be null on failure
    var customers = JsonConvert.DeserializeObject<List<Customer>>(response.Content!);
    return Results.Ok(customers);
}
return Results.BadRequest($"Error: {response.StatusCode}");
});
app.Run();
Note that in the code above, we receive data from the “CustomerManagement” project’s API, pass the response to the Newtonsoft.Json method JsonConvert.DeserializeObject<List<Customer>>(response.Content) and return the deserialized object in the API response.
Now let’s test and see the difference. Run both projects and access the address http://localhost:5054/v1/get_customers_by_country/USA in your browser again. You will now have the data formatted, as Newtonsoft.Json deserializes the object.
XUnit is a widely used unit testing library in the .NET ecosystem.
Through a simple and extensible approach to writing and running tests, XUnit becomes an excellent option for test-driven development (TDD), in addition to supporting advanced features such as parameterized tests and context sharing between test cases, making it a popular choice for projects of all sizes.
To learn about some features of XUnit, let’s create a class with a method to validate customer data in the CustomerProcess project. Create a new folder called “Services” and, inside it, create a new class called “CustomerService.cs” and put the following code in it:
using CustomerProcess.Models;
namespace CustomerProcess.Services;
public class CustomerService
{
public string ValidateCustomer(Customer customer)
{
string errorMessage = string.Empty;
if (string.IsNullOrEmpty(customer.Name))
errorMessage += "Customer name cannot be null or blank";
if (string.IsNullOrEmpty(customer.Email))
errorMessage += "Customer email cannot be null or blank";
return errorMessage;
}
}
Now let’s create a test project. The .NET SDK already includes a template for XUnit test projects. To create it, just run the commands below:
dotnet new xunit -n CustomerProcessTest
cd CustomerProcessTest
dotnet add reference ../CustomerProcess.csproj
When you execute the commands above, a test project named “CustomerProcessTest” is created. When you open it, you will notice a file named “UnitTest1.cs”. Rename it to “ValidationTest.cs” and then replace the existing code with the code below:
using CustomerProcess.Models;
using CustomerProcess.Services;
namespace CustomerProcessTest
{
public class ValidationTest
{
[Fact]
public void ValidateCustomerValid()
{
//Arrange
var customer = new Customer(Guid.NewGuid(), "John", "john@mail.com");
var service = new CustomerService();
//Act
string errorMessage = service.ValidateCustomer(customer);
//Assert
Assert.Empty(errorMessage);
}
[Fact]
public void ValidateCustomerInValid()
{
//Arrange
var customer = new Customer(Guid.NewGuid(), string.Empty, string.Empty);
var service = new CustomerService();
//Act
string errorMessage = service.ValidateCustomer(customer);
//Assert
Assert.NotEmpty(errorMessage);
Assert.Contains("Customer name cannot be null or blank", errorMessage);
Assert.Contains("Customer email cannot be null or blank", errorMessage);
}
}
}
Note that, in the above code, we are creating two test methods. Both have the [Fact]
attribute to indicate to XUnit that they should be executed. It is common to find the structure Arrange > Act > Assert in unit tests—where in Arrange we define the variables and objects, in Act the test is executed and in Assert the result is verified.
In this example, in the first method we are validating if a customer is valid. In this case, the string “errorMessage” must be empty. In the second test, we are checking if it is invalid; in this case, the string must be filled in and with the corresponding error messages.
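XUnit's parameterized tests, mentioned earlier, use the [Theory] attribute with [InlineData] to run the same test method over several inputs. As a sketch against the same CustomerService (the class and method names below are invented for illustration):

```csharp
using CustomerProcess.Models;
using CustomerProcess.Services;
using Xunit;

namespace CustomerProcessTest
{
    public class ValidationTheoryTest
    {
        [Theory]
        [InlineData("", "john@mail.com", "Customer name cannot be null or blank")]
        [InlineData("John", "", "Customer email cannot be null or blank")]
        public void ValidateCustomerReportsMissingField(string name, string email, string expectedError)
        {
            // Arrange: each InlineData row becomes one test case
            var customer = new Customer(Guid.NewGuid(), name, email);
            var service = new CustomerService();

            // Act
            string errorMessage = service.ValidateCustomer(customer);

            // Assert
            Assert.Contains(expectedError, errorMessage);
        }
    }
}
```

Each [InlineData] row shows up as a separate result in the test runner, which keeps related cases together without duplicating the test body.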
To run the tests, just open a terminal in the “CustomerProcessTest” project and run the command dotnet test
and you should have the following result:
Humanizer is a very useful NuGet package for ASP.NET Core apps as it makes it easy to format data, making it more readable and user-friendly.
Humanizer’s features include number and quantity formatting, date and time formatting, pluralization and singularization, and capitalization, among others, which help developers format numbers, dates, times and quantities in a more natural and understandable way, without the need to implement complex methods and functions.
Then, in the terminal of the Project “CustomerProcess” execute the following command to download the Humanizer in the project:
dotnet add package Humanizer
Now, let’s explore some of Humanizer’s features.
With Humanizer, we can spell numbers out in words. For example, the number 1000 becomes “one thousand.”
To do this, add a reference to Humanizer at the top of the Program file (using Humanizer;), then add the code below:
int number = 1000;
string formattedNumber = number.ToWords();
Console.WriteLine(formattedNumber);
Now if you run the command dotnet run
you will get the following result:
To format dates and times, just use the extension methods:
date.Humanize() to indicate the current moment
pastDate.Humanize() to indicate a moment in the past
futureDate.Humanize() to indicate a time in the future
So to test the date and time formatting functions, add the following code:
DateTime date = DateTime.Now;
string humanizedDate = date.Humanize();
Console.WriteLine(humanizedDate);
DateTime pastDate = DateTime.Now.AddHours(-2);
string humanizedPastDate = pastDate.Humanize();
Console.WriteLine(humanizedPastDate);
DateTime futureDate = DateTime.Now.AddDays(1);
string humanizedFutureDate = futureDate.Humanize();
Console.WriteLine(humanizedFutureDate);
Run the dotnet run
command again, and you will have the following output in the console:
Another important feature of Humanizer is displaying time spans in a human-readable way. For example, add the code below to the project, then run dotnet run.
Console.WriteLine(TimeSpan.FromMilliseconds(2).Humanize());
Console.WriteLine(TimeSpan.FromDays(1).Humanize());
Console.WriteLine(TimeSpan.FromDays(16).Humanize());
Note that Humanizer rendered the reported values as milliseconds, days and weeks:
Humanizer also makes it simple to format data sizes such as KB, MB and GB. Let’s implement some functions and see how it works. Add the code below to the project:
//3 - Formatting data size
long sizeInBytes = 1024;
var KBSize = sizeInBytes.Bytes().Humanize();
Console.WriteLine(KBSize);
sizeInBytes = 2097152;
var MBSize = sizeInBytes.Bytes().Humanize();
Console.WriteLine(MBSize);
sizeInBytes = 3221225472;
var GBSize = sizeInBytes.Bytes().Humanize();
Console.WriteLine(GBSize);
sizeInBytes = 5497558138880;
string TBBytes = sizeInBytes.Bytes().Humanize();
Console.WriteLine(TBBytes);
Now if you run the command dotnet run
you will get the following result:
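The pluralization, singularization and capitalization helpers listed at the start of this section follow the same extension-method style. A brief sketch:

```csharp
using Humanizer;

Console.WriteLine("person".Pluralize());    // people
Console.WriteLine("Dogs".Singularize());    // Dog
Console.WriteLine("essential nuget packages".Titleize());
```

The last call capitalizes each word of the phrase, which is handy for turning raw identifiers or user input into display-ready titles.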
In this second part of Essential NuGet Packages for Beginners, we saw five important packages that help developers create quality applications and save time by using their valuable resources.
So whenever you develop a new application and functionality, consider using NuGet packages as they can help you with almost any challenge.
ASP.NET Core has established itself as one of the most popular frameworks for developing modern, scalable web apps. The ASP.NET Core ecosystem offers a multitude of resources and tools to facilitate the process of creating high-quality web applications. Among these resources, its extensibility stands out through the NuGet package management system. NuGet is a package repository that allows developers to add extra functionality to their projects simply and efficiently.
Whether for small or large projects, most companies adopt packages to facilitate and standardize the development of their systems. Some of these packages are especially essential for those who are starting their journey in web development with ASP.NET Core.
This post will introduce five essential NuGet packages for ASP.NET Core beginners. These packages were selected based on their relevance, popularity and practical usefulness in the web development process. Each of them addresses common challenges faced by beginning developers and provides powerful and efficient solutions.
We’ll explore packages that offer features like object-relational mapping (ORM), automatic documentation generation, log handling and more. For each package, we’ll discuss its core functionality, how to integrate it into the ASP.NET Core project and how to use it to enhance your web development.
Once you know these essential NuGet packages, you’ll be ready to confidently take your first steps into web development with ASP.NET Core. From here you can expand your skills and explore other packages available to suit your specific needs.
So if you’re a beginner looking to streamline your workflow and make the most of ASP.NET Core’s potential, this article is for you. Let’s dive into this exciting world of NuGet packages and find out how they can power your web development with ASP.NET Core!
For our example, we will create a minimal API to record product information from an eshop, and, throughout development, we will add NuGet packages. You can access the source code of the project here: Easy Shop source code.
To create the base application, run the command below in the terminal:
dotnet new web -o EasyShop
Now let’s create the class that will represent the main entity of our application—in this case, the product. Then inside the project create a new folder called “Models”; inside it, create a new class called “Product.cs” and replace the generated code with the code below:
namespace EasyShop.Models;
public class Product
{
public Product() { }
public Product(Guid id, string? name, string? supplier, string? category, string? quantityPerUnit, decimal? pricePerUnit, decimal? unitsInStock, bool? available)
{
Id = id;
Name = name;
Supplier = supplier;
Category = category;
QuantityPerUnit = quantityPerUnit;
PricePerUnit = pricePerUnit;
UnitsInStock = unitsInStock;
Available = available;
}
public Guid Id { get; set; }
public string? Name { get; set; }
public string? Supplier { get; set; }
public string? Category { get; set; }
public string? QuantityPerUnit { get; set; }
public decimal? PricePerUnit { get; set; }
public decimal? UnitsInStock { get; set; }
public bool? Available { get; set; }
}
Entity Framework Core (EF Core) is an object-relational mapping (ORM) framework developed by Microsoft. EF Core allows developers to access and manipulate data from a relational database using objects instead of manually writing SQL queries.
It supports multiple database providers including SQL Server, MySQL, SQLite, PostgreSQL and others, which means you can use the same EF Core API to interact with different databases.
EF Core supports the “code first” approach, where you define your data model using classes and properties in your code. From these classes, EF Core can automatically create the database schema, execute queries and manage change tracking so that database updates are reflected in object instances in your application.
It offers features such as Language Integrated Query (LINQ) support for database queries, relationship mapping, transaction support, concurrency control and many other useful capabilities for working with data in a .NET application.
A wide range of companies uses EF Core due to its sophisticated range of features that simplify data access and increase productivity when dealing with database operations.
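As a sketch of that LINQ-based querying style (assuming the ProductDbContext and Product entity built later in this post, and their namespaces), a query might look like:

```csharp
using EasyShop.Data;
using EasyShop.Models;
using Microsoft.EntityFrameworkCore;

public static class ProductQueries
{
    // EF Core translates this LINQ expression tree into SQL at runtime,
    // so no hand-written query string is needed
    public static Task<List<Product>> GetAvailableProducts(ProductDbContext db) =>
        db.Products
          .Where(p => p.Available == true && p.UnitsInStock > 0)
          .OrderBy(p => p.PricePerUnit)
          .ToListAsync();
}
```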
Before creating EF Core migrations, you need the dotnet-ef command-line tool installed globally in your development environment, so if you haven’t installed it yet, use the command below to run the global installation:
dotnet tool install --global dotnet-ef
Then install the EF Core packages in the project. Because this example uses SQLite and migrations, the SQLite provider and the design-time package are needed as well:
dotnet add package Microsoft.EntityFrameworkCore
dotnet add package Microsoft.EntityFrameworkCore.Sqlite
dotnet add package Microsoft.EntityFrameworkCore.Design
To communicate with EF Core, we need a class that inherits from the DbContext class, which is a class that integrates EF Core and is used to create a database session and make it available for queries and operations.
Inside the project create a new folder called “Data”; inside that, create a new class called “ProductDbContext.cs” and replace the generated code with the code below:
using EasyShop.Models;
using Microsoft.EntityFrameworkCore;
namespace EasyShop.Data;
public class ProductDbContext : DbContext
{
public ProductDbContext(DbContextOptions options) : base(options)
{
}
public DbSet<Product> Products { get; set; }
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<Product>().HasData(
new Product(Guid.NewGuid(), "Wireless Mouse ABX", "ABX", "Electronics", "1 unit", 999.99m, 100, true),
new Product(Guid.NewGuid(), "Computer Monitor FHD 1080P", "NewHD", "Electronics", "1 unit", 899.99m, 50, true),
new Product(Guid.NewGuid(), "Athletic Running Tennis Shoes", "BestShoes", "Shoes", "1 pair", 129.99m, 200, true)
);
}
}
Note that the ProductDbContext class inherits from EF Core’s DbContext class and defines a Products property through the code public DbSet<Product> Products { get; set; }, which will receive the database records from the Products table. There is also the OnModelCreating() method, which inserts some sample records when the database and tables are created.
The next step is to define the connection string and database configuration. Then in the “appsettings.json” file that is at the root of the project, add the code below:
"ConnectionStrings": {
"DefaultConnection": "DataSource=product_db.db;Cache=Shared"
},
Now in the “Program.cs” file right after where the builder variable is created, add the code below:
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
builder.Services.AddDbContext<ProductDbContext>(x => x.UseSqlite(connectionString));
Note that in the code above we are creating a string to store the database connection string. This example uses SQLite, an embedded, open-source, serverless relational database management system (RDBMS), and it is stored in the same directory as the application. We are also doing the dependency injection configuration through the AddDbContext()
extension method.
The last step with EF Core is to run the commands that will create the database based on the Product entity. So, open a terminal and run the command below:
dotnet ef migrations add InitialModel
This command will create a folder called “Migrations” which will contain the files with the instructions for creating the database and tables.
Then run the second command in the terminal:
dotnet ef database update
This command will run the Migrations files created earlier and create the database and tables.
If everything went well, you should have a result similar to the image below:
The database access layer is ready. The next step is to expose the database records without directly exposing the entity, which in this case is the Product class. For this, it is necessary to use Data Transfer Objects (DTOs) that will share and secure the information contained in the database. But to transform the Product object into another ProductDto object, we need an object mapper, and one of the most used nowadays is AutoMapper.
AutoMapper is a library that aims to simplify the mapping process between objects of different types, allowing the developer to define custom mapping rules to automatically transfer data from one object to another.
With AutoMapper, you can avoid writing repetitive code to copy property values from one object to another. It allows you to define mapping settings in one place and then use those settings to perform mapping automatically.
To download AutoMapper in the project, use the command below in the terminal:
dotnet add package AutoMapper
Now inside the “Models” folder, create a new folder called “Dtos”; inside it, create a new class called “ProductDto.cs” and replace the existing code with the code below:
namespace EasyShop.Models.Dtos;
public class ProductDto
{
public Guid? Id { get; set; }
public string? Identifier { get; set; }
public string? Seller { get; set; }
public string? Category { get; set; }
public string? QuantityPerUnit { get; set; }
public decimal? PricePerUnit { get; set; }
public decimal? UnitsInStock { get; set; }
public bool? Available { get; set; }
}
The ProductDto class will be used to return data when requested. Note that the ProductDto class has the same properties as the Product class because in this context all data will be returned. But imagine that there were sensitive data such as emails and addresses, among others. In that case, it would be essential to use a DTO to not expose this information openly. For best practice, always avoid exposing the database entity directly without the use of a DTO.
The next step is to configure the classes that will be mapped, informing which value each property will receive. For this, it is common to use the profiles pattern, where we create a class to execute the configurations. Inside the “Dtos” folder, create a new folder called “Profiles” and, inside that folder, create a new class called “ProductProfile.cs” and replace the existing code with the code below:
namespace EasyShop.Models.Dtos.Profiles;
using AutoMapper;
using EasyShop.Models.Dtos;
using EasyShop.Models;
public class ProductProfile : Profile
{
public ProductProfile()
{
CreateMap<Product, ProductDto>()
.ForMember(des => des.Identifier, opt => opt.MapFrom(src => src.Name))
.ForMember(des => des.Seller, opt => opt.MapFrom(src => src.Supplier))
.ReverseMap();
}
}
Note that in the code above, the “ProductProfile” class inherits from AutoMapper’s “Profile” class and, through the constructor, defines the mapping between the properties of the Product and ProductDto classes. It is important to highlight that this configuration is done only once; each time the mapping is needed, we can simply use the AutoMapper resources. In addition, properties that have the same names in both classes, such as “Category,” will be mapped automatically, without the need to declare the mapping.
Another resource used is “.ReverseMap()” which will do the reverse mapping, from the Dto class to the base class.
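With .ReverseMap() registered, the same profile supports mapping in both directions. A quick sketch (the variables are illustrative):

```csharp
// Entity -> DTO (forward mapping)
var dto = _mapper.Map<ProductDto>(product);

// DTO -> Entity (made possible by .ReverseMap())
var entity = _mapper.Map<Product>(dto);
```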
To make use of object mapping, in the root of the project create a new folder called “Services” and inside it create a new class called “ProductService.cs” and replace the existing code with the code below:
using EasyShop.Data;
using EasyShop.Models.Dtos;
using EasyShop.Models;
using Microsoft.EntityFrameworkCore;
using AutoMapper;
using Serilog;
namespace EasyShop.Services;
public class ProductService
{
private readonly ProductDbContext _db;
private readonly IMapper _mapper;
public ProductService(ProductDbContext db, IMapper mapper)
{
_db = db;
_mapper = mapper;
}
public async Task<List<ProductDto>> FindAll()
{
var products = await _db.Products.ToListAsync();
var productsDto = _mapper.Map<IEnumerable<ProductDto>>(products).ToList();
Log.Logger = new LoggerConfiguration()
.MinimumLevel.Debug()
.WriteTo.Console()
.WriteTo.File("logs/product_service_log.txt", rollingInterval: RollingInterval.Day)
.CreateLogger();
Log.Information("Total products quantity: {count}", productsDto.Count());
Log.CloseAndFlush();
return productsDto;
}
public async Task<ProductDto> FindById(Guid id)
{
var product = await _db.Products.FirstOrDefaultAsync(p => p.Id == id);
var productDto = _mapper.Map<ProductDto>(product);
return productDto;
}
public async Task<Guid> Create(CreateUpdateProduct productDto)
{
var productEntity = new Product(Guid.NewGuid(),
productDto.Identifier,
productDto.Seller,
productDto.Category,
productDto.QuantityPerUnit,
productDto.PricePerUnit,
productDto.UnitsInStock,
productDto.Available);
await _db.AddAsync(productEntity);
await _db.SaveChangesAsync();
return productEntity.Id;
}
public async Task Update(CreateUpdateProduct productDto, Guid id)
{
var productEntity = await _db.Products.SingleOrDefaultAsync(t => t.Id == id);
productEntity.Name = productDto.Identifier;
productEntity.Supplier = productDto.Seller;
productEntity.Category = productDto.Category;
productEntity.QuantityPerUnit = productDto.QuantityPerUnit;
productEntity.PricePerUnit = productDto.PricePerUnit;
productEntity.UnitsInStock = productDto.UnitsInStock;
_db.Update(productEntity);
await _db.SaveChangesAsync();
}
public async Task Delete(Guid id)
{
var productEntity = await _db.Products.SingleOrDefaultAsync(t => t.Id == id);
_db.Remove(productEntity);
await _db.SaveChangesAsync();
}
}
Note that the ProductService class receives the IMapper interface through dependency injection and maps the Product entity to the DTO class through the call _mapper.Map<IEnumerable<ProductDto>>(products).ToList().
Thus, every time it is necessary to map objects, just invoke the Map method and pass the classes to be mapped as a parameter. Otherwise, we would have to do mapping manually, which could be done as follows:
var productsDto = new List<ProductDto>();
foreach (var item in products)
{
var productDto = new ProductDto { Id = item.Id, Identifier = item.Name, Seller = item.Supplier, Category = item.Category, QuantityPerUnit = item.QuantityPerUnit, PricePerUnit = item.PricePerUnit, UnitsInStock = item.UnitsInStock, Available = item.Available };
productsDto.Add(productDto);
}
Imagine the time it would take to create manual mappings dozens of times, plus the repetitive code that would be created. That’s why AutoMapper is a great tool to streamline and simplify mapping processes.
Finally, a configuration is missing for the mapping to work, so in the Program.cs file add the code snippet below:
builder.Services.AddAutoMapper(AppDomain.CurrentDomain.GetAssemblies());
Now let’s make the API functional by adding the endpoints. Still in the Program.cs file, just above the app.Run(); call, add the code below:
app.MapGet("/v1/products", async (ProductService service) =>
{
var products = await service.FindAll();
return products.Any() ? Results.Ok(products) : Results.NotFound();
});
app.MapGet("/v1/products/{id}", async (ProductService service, Guid id) =>
{
var product = await service.FindById(id);
return product is not null ? Results.Ok(product) : Results.NotFound();
});
To run the application, just type the command dotnet run in the terminal and access the address http://localhost:PORT/v1/products in the browser. You should have something similar to the GIF below:
The next step is to register the processes executed by the application so that it is possible to monitor and analyze the information.
The Serilog NuGet package is a logging library for .NET applications that provides a flexible and extensible approach to event logging, allowing developers to capture information about application behavior.
Serilog supports various ways of outputting logs, such as text files, databases, cloud storage services and even integration with real-time monitoring systems. It also supports advanced features such as log filtering, custom formatting, enriching logs with contextual information and support for external logging providers.
Run the commands below to download Serilog and its dependencies into the project:
dotnet add package Serilog
dotnet add package Serilog.Extensions.Logging
dotnet add package Serilog.Sinks.Console
dotnet add package Serilog.Sinks.File
Configuring the logs is very simple. In the ProductService class, in the FindAll() method, just below where the productsDto variable is created, add the following code:
Log.Logger = new LoggerConfiguration().MinimumLevel.Debug()
.WriteTo.Console()
.WriteTo.File("logs/product_service_log.txt", rollingInterval: RollingInterval.Day)
.CreateLogger();
Log.Information("Total products quantity: {count}", productsDto.Count());
Log.CloseAndFlush();
The above code creates a new instance of the LoggerConfiguration class, which is used to configure and build a Serilog logger. It uses extension methods to set the minimum log level—.MinimumLevel.Debug()—and the places these logs will be written to: .WriteTo.Console() and .WriteTo.File("logs/product_service_log.txt", rollingInterval: RollingInterval.Day). Note that the logs will be written to the application’s console and to a text file inside the logs folder, which will be created after the first run.
We also call Log.Information with the information to be recorded, which in this case is the total number of products found in the database.
To test it, just run the command dotnet run in the console and access the address http://localhost:PORT/v1/products.
By doing this, the console where the application is running will display the log message, a folder called “logs” will be created at the root of the application, and a file with the information will be generated, as in the image below:
In this post, only some features of Serilog were addressed, but there are several others, so feel free to explore them.
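As a taste of those extra features, the sketch below combines log-level filtering with context enrichment. The property names and the filtered message text are illustrative; Enrich.WithProperty and the Filter API are part of Serilog itself:

```csharp
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    // Attach a fixed property to every log event
    .Enrich.WithProperty("Application", "EasyShop")
    // Drop noisy events before they reach any sink
    .Filter.ByExcluding(e => e.MessageTemplate.Text.Contains("HealthCheck"))
    .WriteTo.Console(outputTemplate:
        "[{Timestamp:HH:mm:ss} {Level:u3}] {Application} {Message:lj}{NewLine}")
    .CreateLogger();
```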
The next step is to add validation to our API input data. To do this, we will use FluentValidation.
FluentValidation is a data validation library for .NET, which allows validations in an easy and fluent way by creating rules in a declarative way.
With FluentValidation, you can easily define complex validation rules for object properties, such as validating required values, minimum or maximum string length, email formats, numbers within certain ranges and more.
FluentValidation is widely used in the .NET community and is supported by most .NET versions.
To download FluentValidation in the project, use the commands below:
dotnet add package FluentValidation
dotnet add package FluentValidation.DependencyInjectionExtensions
So far we’ve only created methods to return data from the database. To validate with FluentValidation, we need to create methods to insert data.
For this, we will create a new DTO called “CreateUpdateProduct” that will be used to create and update records.
Inside the “Dtos” folder, create a new file called “CreateUpdateProduct.cs” and put the code below in it:
namespace EasyShop.Models.Dtos;
public record CreateUpdateProduct(string? Identifier, string? Seller, string? Category, string? QuantityPerUnit, decimal? PricePerUnit, decimal? UnitsInStock, bool Available);
Now, in the root of the project, create a new folder called “Validators” and inside it, create a new class called “ProductValidator” and put the code below in it.
using FluentValidation;
using EasyShop.Models.Dtos;
namespace EasyShop.Validators;
public class ProductValidator : AbstractValidator<CreateUpdateProduct>
{
public ProductValidator()
{
RuleFor(product => product.Identifier).NotEmpty().WithMessage("Identifier is required.");
RuleFor(product => product.Seller).NotEmpty().WithMessage("Seller is required.");
RuleFor(product => product.Category).NotEmpty().WithMessage("Category is required.");
RuleFor(product => product.QuantityPerUnit).NotEmpty().WithMessage("QuantityPerUnit is required.");
RuleFor(product => product.PricePerUnit).NotEmpty().WithMessage("PricePerUnit is required.");
RuleFor(product => product.UnitsInStock).NotEmpty().WithMessage("UnitsInStock is required.");
}
}
Note that in the code above we are passing the class to be validated to the “AbstractValidator” class as a type parameter. Then, within the constructor, we create a validation rule for each mandatory property: if a property is null or empty, an error message is returned stating that it is required.
Here we are just doing simple validations, but FluentValidation has resources for the most varied types of validation.
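For illustration, a few of the richer built-in rules could look like this (these rules are a sketch and are not part of the tutorial’s validator):

```csharp
// Limit the length of a string property
RuleFor(product => product.Identifier)
    .NotEmpty()
    .MaximumLength(50).WithMessage("Identifier must be at most 50 characters.");

// Require a positive price
RuleFor(product => product.PricePerUnit)
    .GreaterThan(0).WithMessage("PricePerUnit must be positive.");

// Constrain a value to a range
RuleFor(product => product.UnitsInStock)
    .InclusiveBetween(0, 10_000).WithMessage("UnitsInStock must be between 0 and 10,000.");
```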
Next, let’s create a method to insert a new product. In the “ProductService.cs” class add the code below:
public async Task<Guid> Create(CreateUpdateProduct productDto)
{
var productEntity = new Product(Guid.NewGuid(),
productDto.Identifier,
productDto.Seller,
productDto.Category,
productDto.QuantityPerUnit,
productDto.PricePerUnit,
productDto.UnitsInStock,
productDto.Available);
await _db.AddAsync(productEntity);
await _db.SaveChangesAsync();
return productEntity.Id;
}
The next step is to add the FluentValidation settings and the new endpoint in the Program class. In the “Program.cs” file just below where the “AddAutoMapper” configuration is made, add the line of code below:
// Requires: using FluentValidation; using EasyShop.Validators; using EasyShop.Models.Dtos;
builder.Services.AddScoped<IValidator<CreateUpdateProduct>, ProductValidator>();
And below the Get endpoints add the new endpoint:
app.MapPost("/v1/products", async (ProductService service, CreateUpdateProduct product, IValidator<CreateUpdateProduct> validator) =>
{
var validationResult = validator.Validate(product);
if (!validationResult.IsValid)
return Results.BadRequest(validationResult.Errors);
var resultId = await service.Create(product);
return Results.Created($"/v1/product/{resultId}", product);
});
Note that in the code above we are passing the product received in the request to the Validate method, which will carry out the validations and return a FluentValidation result object. If the “IsValid” property is false, a “BadRequest” will be returned with the errors; if true, the record will be created in the database.
Now let’s test the validation we just implemented. In this tutorial, Fiddler Everywhere will be used to make requests to the API.
Create a new POST request for the route http://localhost:PORT/v1/products and send the following JSON in the body:
{
"identifier": "",
"seller": "",
"category": "",
"quantityPerUnit": "",
"pricePerUnit": 0,
"unitsInStock": 0,
"available": true
}
The result will be a 400 Bad Request status, and the validation errors will be displayed in the body of the response, as shown in the image below:
Swagger is an open-source tool used to create, document and test RESTful APIs. It provides an interactive interface where you can access and interact with API endpoints directly in your browser. Additionally, Swagger automatically generates API documentation based on source code attributes and comments.
In the context of ASP.NET Core, Swagger is commonly used to document APIs, allowing developers to easily and intuitively visualize and test the API.
To download Swagger packages, use the commands below:
dotnet add package Swashbuckle.AspNetCore
dotnet add package Swashbuckle.AspNetCore.SwaggerGen
dotnet add package Swashbuckle.AspNetCore.SwaggerUI
Now let’s create a controller that will contain a GET endpoint to which the documentation comments will be added. In the root of the project, create a new folder called “Controllers” and inside it create a new class called “ProductController.cs” and put the code below in it:
using EasyShop.Models.Dtos;
using EasyShop.Services;
using Microsoft.AspNetCore.Mvc;
namespace EasyShop.Controllers;
[ApiController]
[Route("[controller]")]
public class ProductController : ControllerBase
{
private readonly ProductService _service;
public ProductController(ProductService service)
{
_service = service;
}
/// <summary>
/// Complete list of products
/// </summary>
/// <returns>List of products</returns>
/// <response code="200">Returns the complete list of products</response>
[HttpGet(Name = "GetProducts")]
public async Task<ActionResult<List<ProductDto>>> Get()
{
var products = await _service.FindAll();
return Ok(products);
}
}
Note that in the above code, we are adding XML documentation comments (///) to the endpoint; Swagger will read these comments from an XML file generated at build time.
To generate the comments in the file, open the “EasyShop.csproj” file and add the code below:
<PropertyGroup>
<GenerateDocumentationFile>true</GenerateDocumentationFile>
<NoWarn>$(NoWarn);1591</NoWarn>
</PropertyGroup>
The <NoWarn> setting is added so that the IDE does not display warnings about code without documentation comments.
The next step is to add the Swagger settings. In the Program.cs file, just above where the “app” variable is created, add the following code:
// Requires: using Microsoft.OpenApi.Models; and using System.Reflection;
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new OpenApiInfo { Title = "EasyShop", Description = "EasyShop", Version = "v1" });
var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
c.IncludeXmlComments(xmlPath);
});
builder.Services.AddControllers();
And below the “app” variable add the following code:
app.UseSwagger();
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("/swagger/v1/swagger.json", "EasyShop V1");
});
app.MapControllers();
Now run the command dotnet run in the terminal and access http://localhost:PORT/swagger/index.html in the browser to display the Swagger interface.
Note in the GIF below how it is possible to execute requests directly through Swagger. If you open the GET - Product route in Swagger, you can also see the comments appearing as documentation in the interface.
As we saw throughout the post, NuGet packages are extremely useful to facilitate and accelerate the development of web applications—so it is vital that beginners know which are the most important ones and how to use them.
In this first part, we saw five packages normally found in small and large ASP.NET Core projects and how to implement them in practice. The second part will show five more packages so you can create a complete application and gain incredible knowledge to face the challenges of a developer’s day.
We are happy to announce that, coinciding with the official release day of .NET 8, the Telerik UI for Blazor and Telerik UI for ASP.NET Core libraries are fully compatible with the newly released framework. This alignment allows developers to immediately utilize the advanced capabilities and performance benefits of .NET 8, while continuing to enjoy the rich set of UI components Progress Telerik provides. Let’s explore some of the key .NET 8 new features together.
For a deep-dive into .NET 8, join us in a dedicated webinar on December 13: Discover the Magic of .NET 8 and Beyond.
Blazor’s new rendering modes provide more control over how UI updates are handled, allowing developers to select the most suitable rendering strategy for their application:
Static Server Rendering (SSR)
Static render mode is the default setting for all components, which results in the component being rendered to the response stream, and does not enable interactivity.
This mode is efficient for apps that don’t require client-side interactivity, as it minimizes the amount of JavaScript needed, improving load times and SEO. In the provided example, the component’s rendering mode isn’t explicitly set, so it adopts the default behavior from its parent context. As a result, the component is statically rendered server-side.
In the below example, the button isn’t interactive and does not call the OnClickHandler method when selected.
Example:
@page "/render-mode-ssr"
<TelerikButton OnClick="@OnClickHandler">Hello!</TelerikButton>
@result
@code {
private string result;
private async Task OnClickHandler()
{
result = DateTime.Now.ToString();
}
}
Interactive Server Rendering
The Server render mode renders the component interactively from the server using Blazor Server. This mode handles user interactions over a real-time connection with the browser, and the circuit connection is established when the Server component is first rendered.
It is ideal for apps with interactive UIs but also needing server-side logic. This mode can help minimize client resources usage while still providing a rich interactive experience.
Example:
@page "/render-mode-interactive-server"
@rendermode RenderMode.InteractiveServer
<TelerikButton OnClick="@OnClickHandler">Hello!</TelerikButton>
@result
@code {
private string result;
private async Task OnClickHandler()
{
result = DateTime.Now.ToString();
}
}
Interactive WebAssembly Rendering
The WebAssembly render mode operates by interactively rendering the component on the client with Blazor WebAssembly. The .NET runtime and the application’s bundle are fetched and cached at the initial rendering of the WebAssembly component.
This render mode is well suited for apps with complex client-side logic, allowing for a rich interactive user experience while leveraging client resources for rendering.
Example:
@page "/render-mode-interactive-client"
@rendermode RenderMode.InteractiveWebAssembly
<TelerikButton OnClick="@OnClickHandler">Hello!</TelerikButton>
@result
@code {
private string result;
private async Task OnClickHandler()
{
result = DateTime.Now.ToString();
}
}
Auto Render Mode
Auto Render Mode dynamically chooses the rendering method at runtime, initially using Blazor Server for server-side rendering, and transitioning to Blazor WebAssembly for client-side rendering on subsequent visits, after client-side resources have been cached. This adaptive rendering approach optimizes the initial load time and enhances interactivity on subsequent visits without requiring developers to decide upfront which rendering model to use.
Example:
@page "/render-mode-auto"
@rendermode RenderMode.InteractiveAuto
<TelerikButton OnClick="@OnClickHandler">Hello!</TelerikButton>
@result
@code {
private string result;
private async Task OnClickHandler()
{
result = DateTime.Now.ToString();
}
}
For more information, take a look at ASP.NET Core Blazor render modes.
The new unified project template consolidates Blazor Server and Blazor WebAssembly hosting models, simplifying the setup and development process. It also introduces static server rendering and streaming rendering, which allows for incremental content delivery, improving perceived load times and user experience.
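Streaming rendering is opted into per component. A minimal sketch (the component and its delay are illustrative):

```razor
@page "/streaming-demo"
@attribute [StreamRendering]

@if (message is null)
{
    <p>Loading…</p>  @* flushed to the browser immediately *@
}
else
{
    <p>@message</p>  @* patched into the page when the data is ready *@
}

@code {
    private string? message;

    protected override async Task OnInitializedAsync()
    {
        await Task.Delay(500); // simulate slow data access
        message = "Data loaded!";
    }
}
```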
If you want to learn more about the new Blazor Web App template, check the following articles:
.NET 8 introduces the ability to render Razor components outside of ASP.NET Core. This feature provides more flexibility by allowing developers to use Razor components in scenarios where ASP.NET Core isn’t used, such as in console applications or other .NET workloads, expanding the utility of Razor components beyond traditional web applications.
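A minimal console-app sketch using the new HtmlRenderer API (MyComponent is a placeholder for any Razor component in your project; the app must reference the Microsoft.AspNetCore.Components.Web package):

```csharp
using Microsoft.AspNetCore.Components.Web;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

// Set up a minimal service container - no web host required
var services = new ServiceCollection();
services.AddLogging();
var serviceProvider = services.BuildServiceProvider();
var loggerFactory = serviceProvider.GetRequiredService<ILoggerFactory>();

await using var htmlRenderer = new HtmlRenderer(serviceProvider, loggerFactory);

// Render the component to a static HTML string on the renderer's dispatcher
var html = await htmlRenderer.Dispatcher.InvokeAsync(async () =>
{
    var output = await htmlRenderer.RenderComponentAsync<MyComponent>();
    return output.ToHtmlString();
});

Console.WriteLine(html);
```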
.NET 8 brings a series of technical improvements to ASP.NET Core. There’s a new metrics API for detailed performance tracking, and SHA-3 hashing extends cryptographic options. HttpClient now supports HTTPS proxies for secure, private communications. Stream-based ZipFile methods enhance file handling without relying on disk storage. A source generator for options validation reduces startup time, and updated LoggerMessageAttribute constructors offer more logging flexibility.
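For instance, the new SHA-3 support surfaces through the familiar one-shot hashing API. Availability depends on the operating system, so the sketch below guards with IsSupported:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

var data = Encoding.UTF8.GetBytes("hello .NET 8");

// SHA-3 requires OS-level support (e.g., recent OpenSSL or Windows builds)
if (SHA3_256.IsSupported)
{
    byte[] hash = SHA3_256.HashData(data);
    Console.WriteLine(Convert.ToHexString(hash));
}
else
{
    // Fall back to SHA-256 where SHA-3 isn't available
    Console.WriteLine(Convert.ToHexString(SHA256.HashData(data)));
}
```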
Additionally, there are performance-focused updates with new tensor operation APIs for AI and machine learning applications, showcasing .NET’s commitment to high-performance computing.
These enhancements contribute to the robustness, performance and security of applications developed with ASP.NET Core in .NET 8. For a comprehensive overview, please take a look at the official .NET 8 documentation.
Telerik UI for Blazor and Telerik UI for ASP.NET Core are set to fully support the upcoming .NET 9 framework, ensuring that developers can leverage the latest enhancements in their web projects.
Blazor developers can look forward to a new collection of components, including a multifunctional Spreadsheet component, DockManager component for advanced layout management, and PopUp/PopOver component for enriched user interactions.
For ASP.NET Core, the emphasis will be on assisting those transitioning from desktop to web development, with components like the PropertyGrid making the shift more intuitive.
Performance is a key focus as well. Expect to see improvements in DataGrids across both UI libraries, aimed at delivering faster data handling and rendering. This step is a part of our ongoing effort to maintain Telerik UI’s leading edge in the world of UI component libraries.
There’s more on the horizon—stay updated with our developments by keeping tabs on our public roadmap.
We are grateful for your ongoing support and invite you to continue sharing your suggestions and requests via the dedicated Feedback Portal. Let’s craft the future direction of Telerik UI together.
Join us on December 13 at 11:00 am ET for the .NET 8 webinar and get up to date with the .NET 8 journey so far and all the hot news in the .NET world across the web, mobile, cross-platform and desktop.