<rss version="2.0">
  <channel>
    <title>Coding Craft</title>
    <link>https://www.davidpuplava.com/</link>
    <description><![CDATA[Learning by doing Game Dev, C#, Orchard Core and other technology things.]]></description>
    <item>
      <title>Refactor with .NET Aspire</title>
      <link>https://www.davidpuplava.com/refactor-with-net-aspire</link>
      <description><![CDATA[<h2>Goal</h2>
<p>In <a href="https://www.davidpuplava.com/fun-with-net-aspire">my last post</a>, I had an existing web application that I wanted to add .NET Aspire to help with orchestrating external dependencies.</p>
<p>In particular, my web application is an LLM chat application that relies on Ollama for chat completions.</p>
<p>For all intents and purposes, Ollama is its own microservice that my web application consumes and communicates with through its REST API.</p>
<p>In this post, I show how I can add Ollama to my solution so that my web application has a reliable and consistent developer experience.</p>
<h2>From Last Post</h2>
<p>Here is a quick recap of <a href="https://www.davidpuplava.com/fun-with-net-aspire">the last post</a>. If you don't need a refresher, skip to the next section.</p>
<p>To my existing ASP.NET MVC Web application solution, I:</p>
<ul>
<li>added a .NET Aspire AppHost project using the project template</li>
<li>added a .NET Aspire ServiceDefaults project using the project template</li>
<li>configured my web application to use <code>builder.AddServiceDefaults();</code></li>
<li>added <code>http</code> and <code>https</code> entries to the <code>launchSettings.json</code> file in my web application</li>
</ul>
<p>For specific details, please check out my last post <a href="https://www.davidpuplava.com/fun-with-net-aspire">here</a>.</p>
<h2>Manually Running Ollama</h2>
<p>For chat completions, my web application relies on an <a href="https://ollama.com/">Ollama</a> server running and configured to serve up a local LLM model.</p>
<p>In particular, my web application uses the <code>OllamaApiClient</code> type from the <code>OllamaSharp</code> library (see it on GitHub <a href="https://github.com/awaescher/OllamaSharp">here</a>) to communicate with an Ollama server over HTTP.</p>
<p><img class="w-100" src="/media/refactor-net-aspire/Screenshot%202025-07-21%20163759.png"></p>
<p>The code for creating the clients is shown here.</p>
<pre><code class="language-csharp">IChatClient chatClient = new OllamaApiClient(new Uri("http://localhost:11435/"));
IChatClient summarizeClient = new OllamaApiClient(new Uri("http://localhost:11435/"));
</code></pre>
<p>For this web application, I use separate clients for chat completion and summarizing. The <code>summarizeClient</code> is used to construct a title for new chat conversations.</p>
<p>Check out this <a href="https://www.davidpuplava.com/migrate-microsoft_extensions_ai_ollama-to-ollamasharp">other post</a> of mine that discusses changing my web application from using the <code>Microsoft.Extensions.AI.Ollama</code> library to using <code>OllamaSharp</code>.</p>
<p>As you can see, an API client is instantiated for Ollama running at the REST API address <code>http://localhost:11435</code>.</p>
<blockquote>
<p><strong>Note</strong>: the default port for Ollama's REST API is <strong>11434</strong>, but here I use <strong>11435</strong> for reasons you'll see shortly.</p>
</blockquote>
<p>To get this to work, I would have to ensure that the Ollama server was running on my machine and that the underlying model had been pulled.</p>
<p>Sometimes, I'd forget and get a runtime error when trying to use my chat application.</p>
<p>The better way is to let .NET Aspire run Ollama for us.</p>
<h2>Configure .NET Aspire to Run Ollama</h2>
<p>With .NET Aspire part of my web application solution, I can add Ollama as a dependent resource as follows.</p>
<p>To my <code>AppHost</code> project, I add the <code>CommunityToolkit.Aspire.Hosting.Ollama</code> <a href="https://www.nuget.org/packages/CommunityToolkit.Aspire.Hosting.Ollama">NuGet package</a>.</p>
<p><img class="w-100" src="/media/refactor-net-aspire/Screenshot%202025-07-21%20165425.png"></p>
<p>In the <code>AppHost</code> project, this is the current state of the <code>Program.cs</code> file.</p>
<p><img class="w-100" src="/media/refactor-net-aspire/Screenshot%202025-07-21%20165816.png"></p>
<p>In the <code>AppHost</code> project, I can then modify the <code>Program.cs</code> file to add Ollama as a resource.</p>
<pre><code class="language-csharp">const int ollamaPort = 11435;
const string OllamaModelName = "llama3";

var ollama = builder.AddOllama("Ollama", ollamaPort)
    .WithDataVolume();

ollama.AddModel(OllamaModelName);
</code></pre>
<p>This code defines constants for a custom Ollama port and the specific model that Ollama should use.</p>
<p>It then chains a call to the <code>.AddOllama(...)</code> extension method (from the CommunityToolkit library) to add the resource with that custom port and a data volume.</p>
<p>And lastly, the code calls <code>.AddModel(...)</code> to tell the <code>Ollama</code> resource to use <code>llama3</code>.</p>
<p>The <code>AppHost</code> <code>Program.cs</code> file looks like this.</p>
<p><img class="w-100" src="/media/refactor-net-aspire/Screenshot%202025-07-21%20170821.png"></p>
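<p>For readers following along in text, the assembled <code>Program.cs</code> at this point might look roughly like the following sketch. It combines the snippets from this post with the project registration from the last post, so treat it as an approximation of the screenshot rather than the exact file.</p>
<pre><code class="language-csharp">using Projects;

var builder = DistributedApplication.CreateBuilder(args);

const int ollamaPort = 11435;
const string OllamaModelName = "llama3";

// Add the Ollama server resource on the custom port, with a data volume
// so pulled models persist across container restarts.
var ollama = builder.AddOllama("Ollama", ollamaPort)
    .WithDataVolume();

// Tell the Ollama resource to pull and serve the llama3 model.
ollama.AddModel(OllamaModelName);

// The web application project, registered in the last post.
builder.AddProject&lt;Sidekicks_MultiApp_OCWeb&gt;("SidekicksOCWeb");

var app = builder.Build();

await app.RunAsync();
</code></pre>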
<p>You can now run the application and see that the Ollama server resource along with the <code>llama3</code> model are available in the resource dashboard.</p>
<p><img class="w-100" src="/media/refactor-net-aspire/Screenshot%202025-07-21%20172316.png"></p>
<p>One last bit here is to add a call to <code>.WaitFor(...)</code> on the <code>ollama</code> resource so that my web application will wait until <code>Ollama</code> is available before it starts up. This avoids the race condition where my web application is ready but Ollama is still starting up in the background.</p>
<pre><code class="language-csharp">builder.AddProject&lt;Sidekicks_MultiApp_OCWeb&gt;("SidekicksOCWeb")
    .WaitFor(ollama);
</code></pre>
<p><img class="w-100" src="/media/refactor-net-aspire/Screenshot%202025-07-21%20172808.png"></p>
<h2>Next Steps</h2>
<p>So far everything looks good, but the application would fail if I deployed it to a server, because my web application is hardcoded to use <code>localhost</code> when calling Ollama.</p>
<p>.NET Aspire provides a way to avoid hardcoding this reference, which I will post about next time.</p>
<p>Until then, keep coding.</p>
<hr>
]]></description>
      <pubDate>Tue, 22 Jul 2025 16:52:03 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/refactor-with-net-aspire</guid>
    </item>
    <item>
      <title>Fun with .NET Aspire</title>
      <link>https://www.davidpuplava.com/fun-with-net-aspire</link>
      <description><![CDATA[<h2>Background</h2>
<p>.NET Aspire is pretty cool.</p>
<p>Riding the wave of AI hype, I decided to write my own AI Chat completion web application.</p>
<p>Microsoft did a <a href="https://learn.microsoft.com/en-us/dotnet/ai/quickstarts/build-chat-app?pivots=openai">great job documenting</a> how you can get up and running with your own AI chat application in a few dozen lines of code.</p>
<p>Personally, though, I don't like feeding the beast of a cloud-based LLM like OpenAI or Azure OpenAI, because writing (to me) is actualized thinking and I'm not about to send my thoughts to a third party.</p>
<p>So, long story short, I wanted to run a local LLM so I know the messages I send to a chat completion service stay with me.</p>
<h2>Local LLM Chat</h2>
<p>Luckily, Microsoft was quick to understand this and provides <a href="https://learn.microsoft.com/en-us/dotnet/ai/quickstarts/chat-local-model">great guidance</a> on how to build a chat app with a local LLM.</p>
<p>The linked tutorial from Microsoft has you running a third-party tool called <a href="https://ollama.com/">Ollama</a>, which provides a RESTful API to connect to an LLM running on your machine.</p>
<p>Ollama is a great tool for finding, downloading and running local LLM models.</p>
<p>They even have a Docker container. And as long as it's running, I can point my chat app at it and my chat messages stay local.</p>
<h2>Opinionated Orchestration</h2>
<p>But writing software that depends on a separate 3rd party running service is a pain.</p>
<p>Enter .NET Aspire, a local dev orchestration framework for cloud-native, microservice-based systems that improves the developer experience.</p>
<p>In a way, you can think of Ollama as a microservice that my local chat application can utilize.</p>
<p>So adding .NET Aspire to my project is a natural next step.</p>
<p>It is super easy.</p>
<h2>Implementation</h2>
<p>Start by adding the .NET Aspire App Host project to your solution.
<img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-18%20234236.png"></p>
<p>You'll see the AppHost project added to your solution with a <code>Program.cs</code> file you can use to start orchestrating parts of your app.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-18%20235128.png"></p>
<p>First, be sure to add a project reference to your web project in your AppHost project.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20083411.png"></p>
<p>Open the <code>Program.cs</code> file in the AppHost project, and add a reference to your project.</p>
<p>In my chat app, <code>SidekicksOCWeb</code> is the project I want to start up when the .NET Aspire AppHost runs.</p>
<p>The fully qualified name is <code>Sidekicks.MultiApp.OCWeb</code>.
<img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-18%20235508.png"></p>
<p>A cool thing that Visual Studio IntelliSense does is give you a type name for your project. My web project is named <code>Sidekicks.MultiApp.OCWeb</code>, and .NET Aspire generates a type <code>Sidekicks_MultiApp_OCWeb</code> (as seen above) so it is easily referenced.</p>
<p>Here is the code for the <code>Program.cs</code> file of the AppHost project.</p>
<pre><code class="language-csharp">using Projects;

var builder = DistributedApplication.CreateBuilder(args);

builder.AddProject&lt;Sidekicks_MultiApp_OCWeb&gt;("SidekicksOCWeb");

var app = builder.Build();

await app.RunAsync();
</code></pre>
<h2>Run it</h2>
<p>Now is a good time to check that it's all working.</p>
<p>Set your startup project to be your AppHost project. One way to do this is to right-click your AppHost project and select "Set as Startup Project". Another way is to select your AppHost project from the startup project dropdown.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20083915.png"></p>
<p>Now go ahead and run your app.</p>
<p>The first thing you'll see is the .NET Aspire startup log console window.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20084207.png"></p>
<p>You should also see the .NET Aspire Dashboard open in a browser as well.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20091830.png"></p>
<p>The Resources tab shows you your web project and any other resources you have for your integrated system.</p>
<p>Go ahead and click through all the stuff in the .NET Aspire Dashboard.</p>
<p>The Console tab gives you logging from your web app.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20091950.png"></p>
<p>The Structured, Traces and Metrics tabs look to be empty though. So let's go ahead and fix that.</p>
<h2>.NET Aspire ServiceDefaults</h2>
<p>Stop your application and add a new project to your solution.</p>
<p>Use the .NET Aspire ServiceDefaults project template.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20092618.png"></p>
<p>You'll get a new project with a single static class called <code>Extensions</code> that you can use in your web application project.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20092811.png"></p>
<p>Reference the <code>ServiceDefaults</code> project from your web application project.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20092938.png"></p>
<p>Then in your <code>Program.cs</code> file, just add <code>builder.AddServiceDefaults()</code> right after your <code>WebApplication.CreateBuilder(args)</code> call, like so.</p>
<pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);
builder.AddServiceDefaults();
</code></pre>
<p>Go ahead and run your application now and see that the Structured logs tab is now populating with information.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20093358.png"></p>
<p>Out of the box!</p>
<p>Then, as you start sending web requests to your web application, the Traces tab will fill in with information about your web traffic.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20093559.png"></p>
<p>In the Metrics tab, select your web project resource and see the rich information there as well.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20093817.png"></p>
<h2>Resource URLs</h2>
<p>One last thing to do is to set the URL for your project resource. In my case it's <code>SidekicksOCWeb</code>.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20151102.png"></p>
<p>See how the <code>URLs</code> column is empty.</p>
<p>I tried many different searches like "how to display my web project URL in .NET Aspire dashboard" and ".NET Aspire dashboard missing web project's URL", but nothing came up.</p>
<p>Nevertheless, I was able to compare a working sample to my application and figured out that I needed to do something in my web application project to get the URLs to show up.</p>
<p>Navigate to your web project's <code>Properties-&gt;launchSettings.json</code> file, and add entries for <code>http</code> and <code>https</code> if you don't have them.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20151612.png"></p>
<p>Mine originally looked like this.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20151557.png"></p>
<p>And here are the missing entries for <code>http</code> and <code>https</code>; ensure that the <code>applicationUrl</code> value matches your web application's actual launch URLs.</p>
<pre><code>"http": {
  "commandName": "Project",
  "dotnetRunMessages": true,
  "launchBrowser": true,
  "applicationUrl": "http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
},
"https": {
  "commandName": "Project",
  "dotnetRunMessages": true,
  "launchBrowser": true,
  "applicationUrl": "https://localhost:5001;http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
}
</code></pre>
<p>The updated <code>launchSettings.json</code> file looks like this.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20152126.png"></p>
<p>After making that change, go ahead and run the solution and you'll see that the URLs are now correctly populated in the .NET Aspire Dashboard.</p>
<p><img class="w-100" src="/media/fun-aspire/Screenshot%202025-07-21%20152310.png"></p>
<h2>Recap, Next Steps</h2>
<p>And that is all there is to adding .NET Aspire to your existing web application.</p>
<p>Next, I'll show how to add an Ollama resource to .NET Aspire and then call it from my existing web application.</p>
<p>This is where .NET Aspire really shines.</p>
<p>Stay tuned.</p>
<hr>
]]></description>
      <pubDate>Tue, 22 Jul 2025 16:52:17 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/fun-with-net-aspire</guid>
    </item>
    <item>
      <title>Migrate Microsoft.Extensions.AI.Ollama to OllamaSharp </title>
      <link>https://www.davidpuplava.com/migrate-microsoft_extensions_ai_ollama-to-ollamasharp</link>
<description><![CDATA[<p>AI is moving at lightning speed.</p>
<p>Things change in a matter of days or weeks, not months.</p>
<p>As a pet project, I created my own chat assistant application utilizing Microsoft technologies and Ollama for local LLM model chats.</p>
<p>I created it a few months ago when a number of the NuGet packages were in preview. Today I went to update the packages and most were several minor versions ahead.</p>
<p>I blindly updated to the latest versions of the packages and now I have build errors.</p>
<p>The first issue was with a project that references <code>Microsoft.Extensions.AI.Ollama</code>, specifically version <code>9.7.0-preview.1.25356.2</code>, which is deprecated; <a href="https://www.nuget.org/packages/Microsoft.Extensions.AI.Ollama">Microsoft recommends using OllamaSharp as an alternative</a>.</p>
<p>I went ahead and uninstalled the <code>Microsoft.Extensions.AI.Ollama</code> package and installed <code>OllamaSharp</code>, but that introduced build errors.</p>
<p><img src="/media/migrate-ai-ollama/Screenshot%202025-07-17%20222738.png"></p>
<p>I checked the OllamaSharp documentation on GitHub <a href="https://github.com/awaescher/OllamaSharp">here</a> and saw that I needed to change a few things.</p>
<p>The client instantiation type changes from <code>OllamaChatClient</code> to <code>OllamaApiClient</code>.</p>
<pre><code class="language-csharp">//IChatClient summarizeClient = new OllamaChatClient(new Uri("http://localhost:11435/"));
IChatClient summarizeClient = new OllamaApiClient(new Uri("http://localhost:11435/"));
</code></pre>
<p>Then, the method call to get the chat completion changed from <code>CompleteAsync</code> to <code>GetResponseAsync</code>.</p>
<pre><code class="language-csharp">//var name = await summarizeClient.CompleteAsync(summarizeHistory, summarizeChatOptions, cancellationToken: responseCancellationToken);
var name = await summarizeClient.GetResponseAsync(summarizeHistory, summarizeChatOptions, cancellationToken: responseCancellationToken);
</code></pre>
<p>The next compiler error is related to the Chat completion itself.
<img src="/media/migrate-ai-ollama/Screenshot%202025-07-17%20222947.png"></p>
<p>Fix it by accessing the <code>.Text</code> property on the response itself rather than going through the <code>.Message</code> property.</p>
<pre><code class="language-csharp">//currentConversation.Name = name.Message.Text;
currentConversation.Name = name.Text;
</code></pre>
<p><img src="/media/migrate-ai-ollama/Screenshot%202025-07-17%20223146.png"></p>
<p>Of course, the most important change is to the streaming responses themselves: replacing the <code>.CompleteStreamingAsync(...)</code> method.</p>
<pre><code class="language-csharp">//await foreach (var item in chatClient.CompleteStreamingAsync(chatHistory, chatOptions, cancellationToken: responseCancellationToken))
await foreach (var item in chatClient.GetStreamingResponseAsync(chatHistory, chatOptions, cancellationToken: responseCancellationToken))
</code></pre>
<p><img src="/media/migrate-ai-ollama/Screenshot%202025-07-17%20223347.png"></p>
<p>So far, so good - but the problem is with assigning to the <code>.Text</code> property of the item in that list. For this particular implementation, it happens to be a <code>ChatMessage</code> type from the <code>Microsoft.Extensions.AI.Abstractions</code> library.</p>
<p>The <code>.Text</code> property used to be writable but now it is readonly.</p>
<p>Turns out that this code is a relic of a prior design that is no longer necessary. The only purpose for appending to the <code>.Text</code> property was so that I could use it to construct the <code>MarkupString</code> on the next line.</p>
<p>Analyzing the code even more, I am constantly querying my data structures looking for the latest <code>Assistant</code> message to get its index, simply to align with the index of the <code>messageHistory</code> array.</p>
<p>That is not necessary now. So let's refactor.</p>
<pre><code class="language-csharp">var lastMessage = messageHistory.LastOrDefault(x =&gt; x.Role == "assistant");
int index = messageHistory.IndexOf(lastMessage!);
string response = "";
await foreach (var item in chatClient.GetStreamingResponseAsync(chatHistory, chatOptions, cancellationToken: responseCancellationToken))
{
    response += item.Text;
    messageHistory[index].Markup = new MarkupString(Markdig.Markdown.ToHtml(markdown: response, pipeline: MarkdownPipeline));

    this.StateHasChanged();

    if (responseCancellationToken.IsCancellationRequested)
    {
        break;
    }
}
chatHistory.Add(new ChatMessage(ChatRole.Assistant, response));
</code></pre>
<p>First, you'll see the index-finding logic outside of the <code>await foreach</code>, and instead of using the <code>chatHistory</code> array, we use the <code>messageHistory</code> array because that's what we care about updating.</p>
<p>Next, you'll see we use a simple string for concatenating the response, which is then used to construct the <code>MarkupString</code>.</p>
<p>Lastly, the <code>chatHistory</code> array is updated with a new <code>ChatMessage</code> object <strong>after</strong> the response has completed streaming. This is what allows us to avoid trying to write to the <code>ChatMessage</code>'s <code>.Text</code> property.</p>
<p>It is also more aligned with how Microsoft documents adding a <code>ChatMessage</code> to history.
<img src="/media/migrate-ai-ollama/Screenshot%202025-07-17%20234729.png"></p>
<p>After these changes, I was able to run the application and it worked as it did before.</p>
<p>Interestingly enough, there is a noticeable speed-up, which may have come from avoiding the repeated index lookups.
<img class="w-100" src="/media/migrate-ai-ollama/sidekicks-refactor.gif"></p>
<p>Until next time.</p>
]]></description>
      <pubDate>Fri, 18 Jul 2025 04:59:20 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/migrate-microsoft_extensions_ai_ollama-to-ollamasharp</guid>
    </item>
    <item>
      <title>Turing Pi 2</title>
      <link>https://www.davidpuplava.com/coding-craft/turing-pi-2</link>
      <description><![CDATA[<h2>New Shiny Objects</h2>
<p>As a programmer, I have a surprising interest in infrastructure and DevOps.</p>
<p>I learned Kubernetes because it seemed very cool.</p>
<p>I set up a homelab using VMware because it also seemed cool, but having a giant rack running in my closet seems strange. Plus, the hardware is aging fast and is already quite old.</p>
<p>Enter Turing Pi.</p>
<h2>Turing Pi Clusterboard</h2>
<p>I think this was probably a targeted ad on social media, but it sure got me good.</p>
<p>The Turing Pi 2 is a mini clusterboard that lets you add multiple compute modules to form a mini computing cluster.</p>
<p>What is a mini computing cluster?</p>
<p>My simplistic understanding is that you can configure your cluster in all sorts of ways, one of which is to configure each compute module as a node in a Kubernetes cluster.</p>
<p>So that's what I did.</p>
<h2>Naked on the Desk</h2>
<p>I've had my Turing Pi 2 board for a long time but I've only started working with it recently because of personal time scarcity.</p>
<p>For quite a while, it was sitting on my desk without a case, just waiting for someone or something to break it in a bad way.</p>
<h2>Turing Pi Mini-ITX Case</h2>
<p>I just recently received my mini-ITX case from Turing Pi, and it has rejuvenated my interest in getting this new Kubernetes cluster up and running.</p>
<p>Check back for future updates to this article for more details on this build.</p>
]]></description>
      <pubDate>Thu, 13 Feb 2025 23:53:16 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/turing-pi-2</guid>
    </item>
    <item>
      <title>How to Become a Software Engineer</title>
      <link>https://www.davidpuplava.com/coding-craft/how-to-become-a-software-engineer</link>
      <description><![CDATA[<p>Write code. Complete an app.</p>
<p>This is the short path you can take to becoming a software engineer.</p>
<h2>The Meaning of the Phrase</h2>
<p>It's important to ask yourself what you mean when you ask "how do I become a software engineer?" because different people may have different ideas about what that means.</p>
<p>I would argue that any person who can code a complete application qualifies as a software engineer.</p>
<p>Not all applications are equal, though.</p>
<h2>Full Stack Application</h2>
<p>Software comes in all shapes and sizes.</p>
<p>You have web apps, desktop apps, mobile apps, daemon/background apps, embedded apps, games, command-line utilities, and more.</p>
<p>A "full stack developer" is one who is competent in all layers of a particular application stack, where "application stack" loosely means the different aspects of a software system: user interface (UI) &amp; user experience (UX); core logic or business rules; and data persistence like a database.</p>
<p>That's a bit of an oversimplification but it covers the major areas necessary for a software system to provide some kind of value to some kind of users.</p>
<p>A person who can write code to deliver each part of the full stack of an application can certainly call themselves a software engineer.</p>
<h2>One Approach</h2>
<p>Here is a brief step-by-step of how one might become a software engineer. It is roughly the bare minimum I'd look for when considering candidates for open software engineering positions at my company.</p>
<h3>1. Create, Retrieve, Update &amp; Delete (CRUD)</h3>
<p>Create a web application that allows a user to create, retrieve, update and delete some kind of data. This is a classic "CRUD" application that covers basic functionality found in most software used by end users.</p>
<p>One example of this is a "todo" application that allows you to keep track of your todo list.</p>
<p>Another example is a movie database for you and/or your family to track movies that you own.</p>
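<p>To make the todo example concrete, here is a minimal in-memory sketch of the four CRUD operations. The <code>TodoItem</code> and <code>TodoStore</code> names are illustrative, not from any particular framework; a real CRUD app would wrap a web UI and a database around this core.</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;
using System.Linq;

public record TodoItem(int Id, string Title, bool Done);

public class TodoStore
{
    private readonly Dictionary&lt;int, TodoItem&gt; _items = new();
    private int _nextId = 1;

    // Create: assign an id and store the new item.
    public TodoItem Add(string title)
    {
        var item = new TodoItem(_nextId++, title, Done: false);
        _items[item.Id] = item;
        return item;
    }

    // Retrieve: a single item by id, or the full list.
    public TodoItem? Get(int id) =&gt; _items.TryGetValue(id, out var item) ? item : null;
    public IReadOnlyList&lt;TodoItem&gt; GetAll() =&gt; _items.Values.ToList();

    // Update: replace the stored item if it exists.
    public bool Update(TodoItem item)
    {
        if (!_items.ContainsKey(item.Id)) return false;
        _items[item.Id] = item;
        return true;
    }

    // Delete: remove the item by id.
    public bool Delete(int id) =&gt; _items.Remove(id);
}
</code></pre>
<p>Swapping the dictionary for a database table and exposing these operations over HTTP is what turns this sketch into the web application described above.</p>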
<p>Once you've mastered creating a web application to store, retrieve and update data in a database, you're ready to apply those skills to a specific domain.</p>
<h3>2. Applied Skills</h3>
<p>Software engineers and software developers are modern day blacksmiths.</p>
<p>We take code and forge it into valuable functionality that a specific person or group finds useful for their work.</p>
<p>In essence, we make software for other people who have problems that software/technology can solve.</p>
<p>A more complex project to work on might be a double entry bookkeeping system for you to track your finances.</p>
<p>This example requires you to understand how double entry bookkeeping works and to build a software system that conforms to the rules and constraints of that particular process.</p>
<p>There are certain rules, or business rules, that a bookkeeper (one who does bookkeeping) must follow to keep a set of financial books for a business.</p>
<p>As a software engineer, your responsibility is to understand the "domain" of that bookkeeper, to understand what bookkeeping is, and to create a software system that allows the bookkeeper to do his/her job with your software system.</p>
<p>Once you've taken your software skills and applied them to a specific domain, that is how you know you've definitely become a software engineer.</p>
<p>The next step is to understand how your software application is just one part of a greater system.</p>
<h3>3. From Software to System</h3>
<p>Where a software application allows a user to do his or her job, a software "system" encompasses how that application coexists with other systems in the world.</p>
<p>For example, to build on the prior mentioned bookkeeping application, you can imagine how that application can exist alongside other systems, like the bank account for the business.</p>
<p>Imagine refactoring your bookkeeping application to interoperate with that bank account so it can automatically read transactions and allow the bookkeeper to classify them.</p>
<p>To achieve that, you would have to know and understand how (if possible) you can integrate with the bank's system likely through some kind of Application Programming Interface (API) to read data.</p>
<p>Most times when you want to read data from another system, you'd follow some kind of developer guide provided by the bank to interface with their system.</p>
<p>Once you've added an integration like this, you're no longer creating just a software application, but you're maintaining a software system.</p>
<h2>Trade Craft</h2>
<p>Software engineering is a unique combination of science, art and applied social skills.</p>
<p>And it certainly is a craft that is ever evolving requiring software engineers to change with it.</p>
<p>Consistency is critical.</p>
<p>Continually writing code to make applications is important to improve your craft.</p>
<p>Understanding how the craft can be applied to other people's professions is next level and further increases your standing as a software engineer.</p>
<p>And then finally, once you understand how you can integrate your application with other systems, you're an advanced software engineer that can apply systems level thinking to solve real problems.</p>
<p>Good luck and keep coding!</p>
<hr>
]]></description>
      <pubDate>Sat, 21 Dec 2024 05:44:37 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/how-to-become-a-software-engineer</guid>
    </item>
    <item>
      <title>My Orchard Core Journey</title>
      <link>https://www.davidpuplava.com/coding-craft/my-orchard-core-journey</link>
      <description><![CDATA[<p>Orchard Harvest 2024 was a great experience.</p>
<p>I've never been to a technology conference before, and although Orchard Harvest is a relatively small conference compared to other open source projects, it is by no means a small experience.</p>
<h2>The Conference</h2>
<p>This year's Orchard Harvest was the first in-person event in several years. With a few dozen attendees, the conference is small by comparison to other major opensource projects.</p>
<p>Nonetheless, the people there, including core maintainers, business owners, public sector users, and other Orchard Core enthusiasts, brought with them incredible passion surrounding the small but mighty content management system (CMS) and application framework.</p>
<h2>The Project</h2>
<p>I found Orchard Core a couple of years ago while researching the best way to add multi-tenancy to my client's custom software reporting system.</p>
<p>I quickly realized the immense value Orchard Core offers out of the box in terms of features.</p>
<p>You get user management, content management, authentication/authorization, workflow management, and of course multi-tenancy (plus many, many other features!)</p>
<p>Even more valuable is the ability for Orchard Core to be added to an existing ASP.NET MVC (or Razor Pages) web project simply by referencing a single NuGet package.</p>
<p>Lastly, the open source GitHub repository for Orchard Core illustrated the active engagement of a passionate community of developers and users.</p>
<p>I read all the documentation I could on Orchard Core, watched all the YouTube tutorial videos I found (thank you Lombiq!) and set out to do something I'd never done before.</p>
<p>I set out to get involved and contribute to an open source project.</p>
<h2>Open Source Development</h2>
<p>The experience was daunting.</p>
<p>I've developed .NET software systems for over 20 years, almost as long as .NET has been around.</p>
<p>My programming confidence was always high when engaging within my own teams on private projects.</p>
<p>But open source was different.</p>
<p>I would be submitting code that the whole world would see. And I was scared that I might embarrass myself.</p>
<p>It took a long time to submit my first pull request. My imposter syndrome was strong, but after several months I garnered enough courage to submit my first PR for Orchard Core.</p>
<p>And I'm glad I did.</p>
<p>I learned a lot about the process of getting a PR approved: the need to iterate on the changes, and the necessity of keeping your PR branch in sync and up to date with the target branch.</p>
<p>The Orchard Core community was very welcoming and provided constructive feedback.</p>
<p>The experience was very rewarding. Enough so that I was more than happy to hop on a plane and join the community at Orchard Harvest 2024.</p>
<h2>What's Next</h2>
<p>I'll continue contributing to the Orchard Core project where I can.</p>
<p>I personally think it is a wonderfully useful technology for any .NET developer out there.</p>
<p>Even if you don't plan to use the Orchard Core CMS or application framework in your project, you should at least explore the code to get an idea of how ASP.NET can be used.</p>
<p>In the words of one of the core maintainers for Orchard Core, the code is beautiful.</p>
]]></description>
      <pubDate>Sun, 15 Sep 2024 23:43:10 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/my-orchard-core-journey</guid>
    </item>
    <item>
      <title>Local Retrieval Augmented Generation (RAG)</title>
      <link>https://www.davidpuplava.com/local-rag</link>
      <description><![CDATA[<p>Large Language Models (LLMs) are a hot topic in the world of software development.</p>
<p>I personally think they're somewhat magical.</p>
<p>I would love to "chat" with my own data to gain insights, etc.</p>
<p>But I don't want to share my data with anyone.</p>
<p>Enter local Retrieval Augmented Generation (RAG).</p>
<h2>What is RAG?</h2>
<p>Retrieval Augmented Generation is a way for you to include your own data when chatting with a Large Language Model (LLM) chatbot.</p>
<p>Including your own data gives tremendous context to the LLM, allowing it to provide more personalized or domain-specific information not otherwise found in the large corpus of text used to train these language models.</p>
<p>Additionally, you can have the chatbot provide citation references to your own documents for your users' consideration.</p>
<h2>What is Semantic Kernel?</h2>
<p>Semantic Kernel is Microsoft's framework for orchestrating artificial intelligence (AI) services.</p>
<p>You can learn more by checking out my series on it:</p>
<ul>
<li><a href="https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-1">Part 1 - Getting Started here</a></li>
<li><a href="https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-2">Part 2 - Planning, Memory &amp; Embeddings here</a>.</li>
</ul>
<p>I used Semantic Kernel because I love programming in .NET and C#.</p>
<h2>A Contrived Example</h2>
<p>Here is a basic list of facts that will simulate "my data", which I'll use to then chat with a local LLM and see what I get.</p>
<p>Credit: This example was inspired by another blog post, but I cannot find it. I'll update my post if I find the original article I used when doing this.</p>
<pre><code>var facts = new OrgFact[]
{
    new("Our headquarters is located in Sydney, Australia.", "Headquarters", "City: Sydney"),
    new("We have been in business for 25 years.", "Years in Operation", "Years: 25"),
    new("Our corporate sponsor is the Melbourne Football Club.", "Corporate Sponsorship", "Team: Melbourne Football Club"),
    new("We have 2 major departments.", "Departments", "Number: 2"),
    new("Our team includes developers among other professionals.", "Occupation", "Job Title: Developer"),
    new("Our team enjoys outdoor activities such as bushwalking.", "Team Activities", "Activity: Bushwalking"),
    new("We have a company pet policy that allows dogs.", "Company Pet Policy", "Type: Dog"),
    new("We prefer catering options featuring Australian cuisine.", "Catering Preferences", "Cuisine: Australian"),
    new("We have expanded our operations to 5 countries.", "International Presence", "Countries: 5"),
    new("Our staff includes graduates from the University of Sydney.", "Education", "University: Sydney"),
    new("Our team is multilingual, speaking 3 languages.", "Languages Spoken", "Number: 3"),
    new("We have a strict allergen policy, including precautions for peanuts.", "Allergen Policy", "Allergen: Peanuts"),
    new("We support athletic achievements, such as participating in marathons.", "Athletic Support", "Event: Marathon"),
    new("We have a company-wide collection of Australian art.", "Company Initiatives", "Item: Australian Art"),
    new("Our team enjoys the Australian spring season for company events.", "Seasonal Preferences", "Season: Spring"),
    new("Our corporate book club's favorite book is 'The Book Thief'.", "Corporate Book Club", "Book: The Book Thief"),
    new("We offer vegetarian, vegan, gluten free and halal options in our corporate diet policy.", "Dietary Policies", "Diet: Vegetarian"),
    new("We actively support volunteering in local community projects.", "Community Engagement", "Place: Local Community Projects"),
    new("We aim to expand our presence to every continent.", "Expansion Goals", "Goal: Every Continent"),
    new("Many of our staff members hold advanced degrees, including in Computer Science.", "Advanced Education", "Degree: Master's in Computer Science")
};
</code></pre>
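<p>The <code>OrgFact</code> type itself isn't shown above. Judging by the three-argument constructor calls, a minimal definition could be a positional record like the following. This is my assumption for illustration, not code from the original project.</p>

```csharp
using System;

// Hypothetical OrgFact definition: a positional record holding the fact
// text, a category label, and a short structured detail tag.
var fact = new OrgFact(
    "Our headquarters is located in Sydney, Australia.",
    "Headquarters",
    "City: Sydney");

Console.WriteLine($"{fact.Category}: {fact.Text}");

public record OrgFact(string Text, string Category, string Detail);
```

<p>A record works nicely here because each fact is an immutable value that can be serialized or embedded as-is.</p>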
<p>When loaded into my local chat bot, I get the following results:
<img class="w-100" src="/media/local-rag/GifMaker_20240809234435545.gif"></p>
<h2>Conclusion</h2>
<p>Retrieval Augmented Generation is a great way to chat with your data.</p>
<p>And Semantic Kernel is a C# developer's way to create some great AI experiences.</p>
]]></description>
      <pubDate>Sat, 10 Aug 2024 04:48:47 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/local-rag</guid>
    </item>
    <item>
      <title>Semantic Kernel SDK Review - Part 2</title>
      <link>https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-2</link>
      <description><![CDATA[<h2>Basics - Continued</h2>
<p>This is Part 2 of a multi-part review of Microsoft's Semantic Kernel SDK.</p>
<p>You can read <a href="https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-1">Part 1 - Getting Started here</a>.</p>
<h2>Intro</h2>
<p>In Part 1, we covered getting started with Microsoft's Semantic Kernel by working through the Polyglot Notebooks in the GitHub repository.</p>
<p>We covered the tools used, constructing a Kernel, creating prompts with arguments, and a simple chat bot example.</p>
<p>This part continues working through the Polyglot notebooks to cover planning, memory, embeddings and other features.</p>
<h2>Planning</h2>
<p>The concept of <code>Planning</code> in Semantic Kernel is interesting, and very cool.</p>
<p>If you recall from <a href="https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-1">Part 1</a>, a Kernel app composes together Kernel Plugins and Kernel Functions to achieve a desired outcome.</p>
<p>But how do these Plugins and Functions work together to solve the problem?</p>
<p>Planning.</p>
<p>Semantic Kernel uses a concept called <code>Planning</code> to decide when, how and with what arguments to call a Kernel's plugins and functions.</p>
<p>More accurately, Semantic Kernel uses AI itself, asking an LLM to draft a plan to achieve the goal. This plan is a series of plugin/function calls that leads to the final result.</p>
<h3>Old vs. New</h3>
<p>Interestingly, at the time of this writing, Semantic Kernel recommends that you use a feature of LLMs called "function calling" to create a plan for your AI agent to execute on a user's behalf.</p>
<p>The "old" way of using built-in "Stepwise" and "Handlebars" planner types has been superseded in favor of LLMs that support "function calling".</p>
<p>Note that "function calling" requires your Chat Completion service to use an LLM that supports it.</p>
<h3>Planning Example</h3>
<p>Note, this Polyglot Notebook uses the "old" way with a Handlebars planner.</p>
<p>The subsequent Notebook in the Semantic Kernel repository makes use of the <code>SummarizePlugin</code> and <code>WriterPlugin</code> which are parameterized prompt templates.</p>
<p>The <code>SummarizePlugin</code> has function prompts for making abstracts readable, note generation, summarization, and topics.</p>
<p>The <code>WriterPlugin</code> has prompt functions for several things such as email, acronyms, novels, stories and poems.</p>
<p>As before, a basic Kernel app is constructed with an OpenAI chat completion service.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-15%20155028.png"></p>
<p>The next steps, naturally, are to configure the Summarize and Writer Plugins within the Semantic Kernel.</p>
<p>Lastly, you can use the planner object, passing in your configured Kernel along with your request to get a plan of execution.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-15%20160321.png"></p>
<p>For this example, you are asking your Kernel to generate multiple date ideas communicated using a poem.</p>
<p>As you can see, there are two steps to the plan.</p>
<p>The final step is to execute the generated plan using the object returned by Semantic Kernel's planning step. This gives you the desired output.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-15%20160912.png"></p>
<p>As a little twist, the Notebook has you add an inline function to rewrite output in a Shakespearean style.</p>
<p>You can add this to your Kernel app and then write the date idea poems in the style of Shakespeare.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-15%20161142.png"></p>
<h3>Analyzing the Example</h3>
<p>So what's going on here?</p>
<p>Looking more closely at the defined prompt of the <code>Rewrite</code> function in the <code>Writer</code> plugin, you see inputs for <code>$style</code> and the actual <code>$input</code> of the user request.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-15%20161616.png"></p>
<p>The planner uses the context given by the user describing how it is Valentine's Day and that you want date ideas written as a poem. It is then able to use the LLM to determine which values to extract from the user's request context, and configure those as the input Kernel Arguments for the <code>Rewrite</code> function in the <code>Writer</code> plugin.</p>
<p>You're essentially using the LLM to figure out how to better prompt the LLM. Which is very cool.</p>
<h2>Memory</h2>
<p>Moving on to the next Polyglot Notebook in Semantic Kernel's GitHub repository, we get to Semantic Memory, which is a way to persist state within your Kernel application for more interesting applications.</p>
<p>The start of the notebook is the familiar builder pattern setup for a Kernel application.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-15%20173210.png"></p>
<p>Conceptually, a Kernel plugin for Memory is used, but also a Memory storage technology.</p>
<p>The Polyglot notebook uses an in-memory solution called <code>VolatileMemoryStore</code>, which is NOT persistent across sessions, but there are different storage implementations that give you persistence.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-15%20173722.png"></p>
<h3>Embeddings</h3>
<p>At the time of this writing, the Microsoft documentation for embeddings is limited. Looks like they are in the process of adding it.</p>
<p>I'll update this section in the future.</p>
<h3>Memory as a Data Structure</h3>
<p>The starting point is to manually add some contrived examples of embedding information about a person.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-15%20174210.png"></p>
<p>In short, generating embeddings is all about turning your information into a set of floating point numbers that correspond to relevance and meaning.</p>
<p>Here you see the process of turning those memories into vector embeddings.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-15%20174403.png"></p>
<p>It's important to note that an OpenAI model for generating embeddings was used. You can see it was configured during initialization of the memory object.</p>
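<p>To make the idea concrete, here is a toy sketch of how relevance between two embedding vectors is typically measured, using cosine similarity. The three-dimensional vectors are made up for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.</p>

```csharp
using System;

// Cosine similarity: dot product of the vectors divided by the product
// of their magnitudes. Values near 1 mean the vectors point the same way,
// i.e. the texts they encode have similar meaning.
static double CosineSimilarity(float[] a, float[] b)
{
    double dot = 0, magA = 0, magB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(magA) * Math.Sqrt(magB));
}

// Made-up vectors standing in for real embeddings.
var cat = new float[] { 0.9f, 0.1f, 0.0f };
var kitten = new float[] { 0.8f, 0.2f, 0.1f };
var car = new float[] { 0.0f, 0.2f, 0.9f };

Console.WriteLine(CosineSimilarity(cat, kitten)); // high, ~0.98
Console.WriteLine(CosineSimilarity(cat, car));    // low, ~0.02
```

<p>This is the operation a memory store runs under the hood when it looks for the memories most relevant to a query.</p>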
<h3>Total Recall</h3>
<p>It's nice that the tutorial refers back to the chat bot notebook from <a href="https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-1">Part 1</a> so you can see how memory builds on top of prior work.</p>
<p>Semantic Kernel has a native <code>TextMemoryPlugin</code> with a <code>recall</code> function that returns the most relevant memory currently found in the storage medium backing Semantic Memory.</p>
<p>Add that to your Kernel app by importing it as an object.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-16%20000052.png"></p>
<p>Now you can recreate the chat function that loops through your conversation storing history, but using a prompt that is primed with the contrived information about yourself.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-16%20000308.png"></p>
<p>As always, be sure to set your Kernel Arguments to pass in.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-16%20000731.png"></p>
<p>Essentially, you now have a chat bot that has information about you, so you can ask it context sensitive questions about yourself.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-16%20001012.png"></p>
<p>This demonstrates how you can use Semantic Memory, which is part of Semantic Kernel, as a way to inject context specific information.</p>
<p>Using this context like a user's personal information can provide a more personalized chat experience.</p>
<p>But this information is just contrived sample data, so what is special about that?</p>
<p>Not much, but you can extend this concept to your own documents to get a better experience interacting with your data.</p>
<h3>Talk to Me Goose</h3>
<p>Have you ever wanted to chat with your data?</p>
<p>I have. And I think you will too.</p>
<p>Semantic Kernel provides a way to create embeddings of your own documents which you can then use within your Kernel app.</p>
<p>The notebook first has you build memories using files from the GitHub repository.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-16%20001620.png"></p>
<p>As before, you construct a memory builder object using the VolatileMemoryStore as the storage medium. You can see an example of that above.</p>
<p>Iterating over the array of URLs, you add them to your memory data structure.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-16%20001903.png"></p>
<p>Now you can search the memory construct for your documents and elicit information about them using natural language.</p>
<p>In the notebook's example, you can get a list of Polyglot notebooks by saying how much you like them when asking how to get started.</p>
<p><img class="w-100" src="/media/sk-review-part-2/Screenshot%202024-07-16%20002035.png"></p>
<p>It's cool to imagine creating a chat bot or other AI agent that can engage with your own data.</p>
<h3>Conclusion</h3>
<p>Getting deeper into the Semantic Kernel SDK, planning, memory and embeddings provide an enriching element to the art of designing an AI agent.</p>
<p>Looking ahead to future parts, it looks like there are additional integrations with other AI services such as OpenAI's DALL-E image generation service.</p>
<p>For details on getting started, see <a href="https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-1">Part 1</a> of this series.</p>
<hr>
]]></description>
      <pubDate>Wed, 17 Jul 2024 15:29:05 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-2</guid>
    </item>
    <item>
      <title>Semantic Kernel SDK Review - Part 1</title>
      <link>https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-1</link>
      <description><![CDATA[<p>This is Part 1 of a multi-part review of Microsoft's Semantic Kernel SDK.</p>
<p>You can read <a href="https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-2">Part 2 - Planning, Memory &amp; Embeddings here</a>.</p>
<h2>Intro</h2>
<p>Semantic Kernel is an open source software development kit (SDK) from Microsoft for building Artificial Intelligence (AI) agents.</p>
<p>As of July 2024, programming languages supported are C#, Python and Java.</p>
<p>I prefer C# and am fascinated by the world of AI and these so-called AI agents. So Semantic Kernel is a natural choice for exploring this niche of AI.</p>
<h2>Background</h2>
<p>I have a rudimentary working knowledge about AI and Large Language Models (LLMs), but otherwise, I am currently a complete beginner.</p>
<p>I like solving problems and am excited about the kinds of problems that AI might help me to solve.</p>
<p>My review of Semantic Kernel is less an authoritative opinion about its efficacy, and more of an exploration into whether I can use it to solve certain problems I encounter on a regular basis.</p>
<p>This review is me getting started with Semantic Kernel's step-by-step walk through.</p>
<h2>Tools</h2>
<p>I am using Visual Studio Code and Git to clone the open source repository: <a href="https://github.com/microsoft/semantic-kernel">https://github.com/microsoft/semantic-kernel</a>.</p>
<h2>Step-by-Step</h2>
<p>The GitHub repository has a folder with several Polyglot notebooks. This is my first exposure to them, and I think they are quite nice.</p>
<p>If you've never used a Polyglot notebook before, it's a cross between a Wiki page and a REPL app. You can read along with written documentation and then execute blocks of code inline with the documentation.</p>
<p>Very nice.</p>
<h3>Getting Started</h3>
<p>The Semantic Kernel SDK comes with support for loading LLMs from Microsoft Azure and OpenAI. All you need to provide is a handful of configuration settings like an API key and LLM model name.</p>
<p>For my walk through I used an OpenAI key to get started quickly. Long term, I hope to use Semantic Kernel with a local LLM but that will have to wait.</p>
<p>Programmatically, the SDK uses the latest .NET-prescribed builder pattern to construct a Semantic Kernel application.</p>
<p>With just a few lines of code, you're up and running with a Semantic Kernel specific application that is configured to use an Azure or OpenAI ChatCompletion service.</p>
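<p>For reference, the builder setup looks roughly like the following. This is a hedged sketch based on the current Semantic Kernel packages: it requires the <code>Microsoft.SemanticKernel</code> NuGet package and a live API key, the model name is illustrative, and the exact method names may differ between SDK versions.</p>

```csharp
using System;
using Microsoft.SemanticKernel;

// Sketch only: assumes an OpenAI API key in an environment variable and
// the Microsoft.SemanticKernel package; the model id is illustrative.
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);
var kernel = builder.Build();
```

<p>From here, the <code>kernel</code> object is what you hand plugins, functions and prompts to.</p>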
<h3>Pause for Perspective</h3>
<p>Semantic Kernel essentially provides a so-called "Kernel" application that allows you as the developer to orchestrate interaction with various services, both AI in nature and your own home grown services.</p>
<p>There are a few terms you need to familiarize yourself with like Semantic Plugins and Semantic Functions, which I call Plugins and Functions moving forward.</p>
<p>The objective is to compose together the right plugins and functions to solve your specific problem.</p>
<h3>Back to Tutorial</h3>
<p>The first Kernel Plugin you use is the "Fun Plugin", which essentially asks for a family friendly joke.</p>
<p><img class="w-100" src="/media/sk-getting-started/Screenshot%202024-07-12%20222327.png"></p>
<h3>Prompts, From File and Inline</h3>
<p>A core idea with Semantic Kernel is to construct an AI chat "prompt", which is specific instruction text you give to an LLM to describe the output you want it to provide.</p>
<p>Semantic Kernel prompts can exist in individual files on disk, or be constructed in code, also known as "inline".</p>
<p><img class="w-100" src="/media/sk-getting-started/Screenshot%202024-07-12%20223152.png"></p>
<h3>Let's Chat</h3>
<p>Now for more interesting stuff, which is a chat like experience that you'd expect from something like ChatGPT.</p>
<p>The key idea here is that these Kernel prompts have a special syntax where you as the developer can specify parameters that your user can pass in as arguments to the Kernel.</p>
<p>Semantic Kernel describes these as "Kernel Arguments", which is probably a misuse of the term "argument", but that is not important here.</p>
<p>What's important is that you can inject dynamic information into your Kernel's prompt. For example, see the following, where you have two Kernel Arguments: a <code>$history</code> for injecting your chat history information, and a <code>$userInput</code> for the next user message for the chat bot.</p>
<p><img class="w-100" src="/media/sk-getting-started/Screenshot%202024-07-12%20223514.png"></p>
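<p>Conceptually, the Kernel Arguments are a key/value dictionary whose values get substituted into the <code>{{$...}}</code> placeholders of the prompt template. The following plain C# sketch mimics that substitution step for illustration; it is not the actual Semantic Kernel template engine.</p>

```csharp
using System;
using System.Collections.Generic;

// A prompt template with two placeholders, as in the notebook.
var prompt = """
ChatBot can have a conversation with you about any topic.

{{$history}}
User: {{$userInput}}
ChatBot:
""";

// "Kernel Arguments" are essentially a key/value dictionary.
var arguments = new Dictionary<string, string>
{
    ["history"] = "",
    ["userInput"] = "Hi, can you recommend a good book?"
};

// Naive substitution standing in for the real template engine.
foreach (var (name, value) in arguments)
    prompt = prompt.Replace("{{$" + name + "}}", value);

Console.WriteLine(prompt);
```

<p>The real SDK does the substitution (and much more) for you when you invoke a prompt with an arguments object.</p>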
<p>In the rest of this notebook, you construct your Kernel Arguments object, which is essentially a key/value dictionary, and pass it to your constructed chat bot prompt to get a book suggestion.</p>
<p><img class="w-100" src="/media/sk-getting-started/Screenshot%202024-07-12%20224215.png"></p>
<p>This is surprisingly little code to achieve exceptional results that, with a little effort, could rival the user experience of these large LLM services.</p>
<p>The rest of the notebook tutorial demonstrates how you can write a single function to generate a chat response, store it in the history argument value, and keep entering user input to get context aware, conversational output.</p>
<p><img class="w-100" src="/media/sk-getting-started/Screenshot%202024-07-12%20224938.png"></p>
<p>Again, with very little code, this is a very cool experience.</p>
<h2>Conclusion</h2>
<p>This is only my first look at the Semantic Kernel SDK.</p>
<p>I have a lot more of their step-by-step Polyglot Notebooks to get through, but overall I'm happy with what you can achieve with very little code running Semantic Kernel.</p>
<p>Check back for future parts where I continue working through how Semantic Kernel can solve everyday problems.</p>
<ul>
<li><a href="https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-2">Part 2 - Planning, Memory &amp; Embeddings here</a>.</li>
</ul>
<hr>
]]></description>
      <pubDate>Sat, 10 Aug 2024 03:51:59 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/semantic-kernel-sdk-review-part-1</guid>
    </item>
    <item>
      <title>How to Fix Docker Compose Not Starting</title>
      <link>https://www.davidpuplava.com/coding-craft/how-to-fix-docker-compose-not-starting</link>
      <description><![CDATA[<p>After a power outage, one of the servers in my homelab did not come back gracefully.</p>
<p>This server uses Docker Compose to run a couple services I use locally.</p>
<h2>Make Sure it's Plugged In</h2>
<p>Logging into the server, I noticed that restarting Docker Compose caused an error.</p>
<pre><code>$ sudo docker-compose up -d
Error response from daemon: Conflict. The container name "/foo" is already in use by container "edjd98889d8dfddd9090998ddd09898ddd234234". You have to remove (or rename) that container to be able to reuse that name....
</code></pre>
<p>The top posts from a Google search discussed checking to see if anything was using that port, which was probably not the underlying cause.</p>
<h2>Troubleshooting Time</h2>
<p>So next came a set of troubleshooting steps.</p>
<p>First up was to check the <code>docker-compose.yaml</code> file to see if anything was out of the ordinary.</p>
<pre><code>$ vi docker-compose.yaml
</code></pre>
<p>Nothing unusual in the file, but I noticed that when I closed it, there was a quick flicker of vi's command status bar that flashed red showing some kind of error.</p>
<p>I re-opened the file, then did a <code>:wq</code> vi command to save the file, and received an "unable to save...out of disk space" error.</p>
<h2>Getting Somewhere</h2>
<p>Alright, let's check the disk usage.</p>
<pre><code>$ df -h
Filesystem                                                   Size  Used Avail Use% Mounted on
tmpfs                                                        392M  1.5M  390M   1% /run
/dev/mapper/server--vg-root                          193G  193G     0  100% /
tmpfs                                                        2.0G  4.0K  2.0G   1% /dev/shm
tmpfs                                                        5.0M     0  5.0M   0% /run/lock
/dev/sda1                                                    470M  254M  192M  57% /boot
nas.local:/volume1/serverbackups/server   14T  3.0T   11T  22% /mnt/nfs/serverbackups/server
tmpfs                                                        392M  4.0K  392M   1% /run/user/1000
/home/user/.Private                                      193G  193G     0  100% /home/user
</code></pre>
<p>Yep, 100% of the file system was used up which was very unexpected.</p>
<p>After several attempts to delete files, I was still at 100% file system in use.</p>
<p>Using a series of invocations of the following command:</p>
<pre><code>$ sudo du -sch *
</code></pre>
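<p>A handy variation (my own habit, not from the original troubleshooting session) is to limit <code>du</code> to one directory level and sort the results, which surfaces the biggest offenders immediately:</p>

```shell
# Show first-level directory sizes under /var, largest first.
# Permission errors are silenced; run with sudo for a complete picture.
du -h --max-depth=1 /var 2>/dev/null | sort -rh | head -n 10
```

<p>Repeating this one level down at a time walks you straight to the directory eating the disk.</p>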
<p>I concluded that the <code>/var/lib/docker</code> folder was using an unusual amount of disk space.</p>
<p>So it made sense to clean up docker.</p>
<h2>Trimming the Fat</h2>
<p>After years of use, I knew I had plenty of old images that could be removed.</p>
<p>So I ran the following command.</p>
<pre><code>$ sudo docker system prune
...
deleted: sha256:715a1b962166ede06c7a0e87d068a4b686e6066e0eca5ecab6f4d6cfab2121fe
deleted: sha256:97ab3baee34d0c75ee10e65c63a06cbc87d20d695c17d14ad565d4ff1b8dc2ca
deleted: sha256:9f54eef412758095c8079ac465d494a2872e02e90bf1fb5f12a1641c0d1bb78b

Total reclaimed space: 15.67GB
</code></pre>
<p>The output of the command shows that I reclaimed 15.67GB of space.</p>
<p>Awesome, time to start everything back up.</p>
<pre><code>$ sudo docker-compose up -d
Creating network "user_default" with the default driver
Creating service1  ... done
Creating service2 ... done
Creating service3 ... done
</code></pre>
<p>After everything restarted successfully, I could see that my disk file usage was back to normal, down almost 100G.</p>
<pre><code>$ df -h
Filesystem                                                   Size  Used Avail Use% Mounted on
tmpfs                                                        392M  1.5M  390M   1% /run
/dev/mapper/server--vg-root                          193G   91G   92G  50% /
tmpfs                                                        2.0G  4.0K  2.0G   1% /dev/shm
tmpfs                                                        5.0M     0  5.0M   0% /run/lock
/dev/sda1                                                    470M  254M  192M  57% /boot
nas.local:/volume1/serverbackups/server   14T  3.0T   11T  22% /mnt/nfs/serverbackups/server
tmpfs                                                        392M  4.0K  392M   1% /run/user/1000
/home/user/.Private                                      193G   91G   92G  50% /home/user
</code></pre>
<h2>Conclusion</h2>
<p>It's hard to say what happened, but given that this occurred after a power outage, it's likely that the servers in my homelab did not start back up in the correct order.</p>
<p>Specifically, I think my network attached storage (NAS) server was off when the broken server tried to mount a drive used by the docker services.</p>
<p>This likely caused some sort of runaway crash loop that filled up log files in all the docker volumes.</p>
<p>Then, after everything started successfully, the docker services probably cleaned up the log files or any large mounted files they were hanging onto.</p>
]]></description>
      <pubDate>Thu, 27 Jun 2024 03:37:06 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/how-to-fix-docker-compose-not-starting</guid>
    </item>
    <item>
      <title>Kubernetes: Refresh Certs with Microk8s Cluster</title>
      <link>https://www.davidpuplava.com/coding-craft/kubernetes-refresh-certs-with-microk8s-cluster</link>
      <description><![CDATA[<p>It's that time of year again where my homelab Kubernetes cluster (running microk8s) certificates expire.</p>
<p>I didn't even notice.</p>
<p>I happened to check one of my websites and noticed nothing came back.</p>
<p>Early notification when there is a problem in my homelab is a discussion for another time.</p>
<p>Today, I want to run through what I did to get my system back up and running.</p>
<h2>Symptom</h2>
<p>My websites are down, returning 404 Not Found or 503 Service Unavailable errors.</p>
<h2>Troubleshooting</h2>
<p>I first log into my Kubernetes master node and check to see if my pods are running.</p>
<p>Note: <code>k</code> is an alias for <code>kubectl</code>.</p>
<pre><code>$ k get all
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port?
</code></pre>
<p>Oh boy, now what?</p>
<p>I have a few years experience running a kubernetes cluster and am familiar with expired certificates and what a pain it is to fix them.</p>
<p>I happen to be running a lightweight distribution of kubernetes called <a href="https://microk8s.io">microk8s</a>. I found <a href="https://microk8s.io/docs/command-reference#heading--microk8s-refresh-certs">their documentation helpful regarding certificates</a>.</p>
<pre><code>$ sudo microk8s refresh-certs --help
[sudo] password for :
Usage: microk8s refresh-certs [OPTIONS] [CA_DIR]

  Replace the CA certificates with the ca.crt and ca.key found in CA_DIR.
  Omit the CA_DIR argument and use the '--cert' flag to auto-generate a new CA
  or any other certificate.

Options:
  -c, --check  Check the expiration time of the installed certificates
  -e, --cert   The certificate to be autogenerated, must be one of ['ca.crt', 'server.crt', 'front-proxy-client.crt']
  -u, --undo   Revert the last refresh performed
  -h, --help       Show this message and exit.
</code></pre>
<pre><code>$ sudo microk8s refresh-certs -c
The CA certificate will expire in 3276 days.
The server certificate will expire in 364 days.
The front proxy client certificate will expire in -9 days.
</code></pre>
<p>Whoops, I can't believe my sites have been down for 9 days.</p>
<p>Here's the fix to renew the expired certs:</p>
<p>First the <code>server.crt</code>.</p>
<pre><code>$ sudo microk8s refresh-certs -e server.crt
Taking a backup of the current certificates under /var/snap/microk8s/6673/certs-backup/
Creating new certificates
Signature ok
subject=C = GB, ST = Canonical, L = Canonical, O = Canonical, OU = Canonical, CN = 127.0.0.1
Getting CA Private Key
Restarting service kubelite.
Restarting service cluster-agent.
</code></pre>
<p>Then the <code>front-proxy-client.crt</code>.</p>
<pre><code>$ sudo microk8s refresh-certs -e front-proxy-client.crt
Taking a backup of the current certificates under /var/snap/microk8s/6673/certs-backup/
Creating new certificates
Signature ok
subject=CN = front-proxy-client
Getting CA Private Key
Restarting service kubelite.
</code></pre>
<p>Now, I can recheck to make sure everything looks good.</p>
<pre><code>$ sudo microk8s refresh-certs -c
The CA certificate will expire in 3276 days.
The server certificate will expire in 364 days.
The front proxy client certificate will expire in 364 days.
</code></pre>
<p>Awesome! Now let's see if I can access my kubernetes components.</p>
<pre><code>$ k get nodes
NAME       STATUS   ROLES    AGE    VERSION
kbndev01   Ready    &lt;none&gt;   366d   v1.26.15
kbndev02   Ready    &lt;none&gt;   366d   v1.26.15
kbmdev01   Ready    &lt;none&gt;   373d   v1.26.15
</code></pre>
<p>Excellent! But when I try to access one of my sites, still nothing comes up.</p>
<h2>More Solutions, More Problems</h2>
<p>After several different troubleshooting steps of restarting services and rebooting all the nodes, I find this.</p>
<pre><code>$ k get nodes
NAME       STATUS     ROLES    AGE    VERSION
kbmdev01   Ready      &lt;none&gt;   373d   v1.26.15
kbndev01   NotReady   &lt;none&gt;   366d   v1.26.15
kbndev02   NotReady   &lt;none&gt;   366d   v1.26.15
</code></pre>
<p>Why are these nodes not ready?</p>
<p>Do I need to refresh the certs on them as well?</p>
<p>The answer is yes.</p>
<p>I log into each node individually and run the <code>refresh-certs</code> commands to fix the certs.</p>
<p>Then after a few seconds, all the nodes are ready.</p>
<pre><code>$ k get nodes
NAME       STATUS   ROLES    AGE    VERSION
kbmdev01   Ready    &lt;none&gt;   375d   v1.26.15
kbndev02   Ready    &lt;none&gt;   368d   v1.26.15
kbndev01   Ready    &lt;none&gt;   368d   v1.26.15
</code></pre>
<p>After that, all my sites were once again accessible.</p>
<h2>Conclusion</h2>
<p>If your Kubernetes certificates have expired, especially when using the <code>microk8s</code> distribution, be sure to renew the certs on every node.</p>
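<p>One way to keep an eye on expiry going forward is to ask <code>openssl</code> for a certificate's <code>notAfter</code> date directly. Here is a sketch that generates a throwaway self-signed cert just for the demo; on a MicroK8s node you would point it at the real files instead (typically under <code>/var/snap/microk8s/current/certs/</code>, path assumed from my install).</p>

```shell
# Create a throwaway self-signed cert (1-day validity) just for the demo
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null

# Print the expiration date; swap /tmp/demo.crt for e.g. server.crt on a node
openssl x509 -enddate -noout -in /tmp/demo.crt
```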
]]></description>
      <pubDate>Wed, 12 Jun 2024 03:06:49 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/kubernetes-refresh-certs-with-microk8s-cluster</guid>
    </item>
    <item>
      <title>How to Use Private NuGet Feed with Docker Build</title>
      <link>https://www.davidpuplava.com/coding-craft/how-to-use-private-nuget-feed-with-docker-build</link>
      <description><![CDATA[<h2>Private NuGet Repository</h2>
<p>You can use a private NuGet repository in your docker build process. The best, most deterministic way to do this is to add a <code>NuGet.config</code> file to your repository with the feed details.</p>
<pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;configuration&gt;
    &lt;packageSources&gt;
        &lt;clear /&gt;
        &lt;add key="NuGet" value="https://api.nuget.org/v3/index.json" /&gt;
        &lt;add key="myfeed" value="https://www.davidpuplava.com/nuget/index.json" /&gt;
    &lt;/packageSources&gt;
    &lt;disabledPackageSources /&gt;
&lt;/configuration&gt;
</code></pre>
<p>Consider the following <code>Dockerfile</code> for building an ASP.NET web application. This works if your NuGet repository does not require authentication.</p>
<pre><code>FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build-env
LABEL stage=build-env

WORKDIR /app

# Copy and build
COPY ./src /app
COPY ./NuGet.config /app
RUN dotnet publish /app/MyApp.Web -c Release -o ./build/release --framework net8.0

# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
EXPOSE 80
ENV ASPNETCORE_URLS http://+:80
WORKDIR /app
COPY --from=build-env /app/build/release .
ENTRYPOINT ["dotnet", "MyApp.Web.dll"]
</code></pre>
<h2>Accessing Securely</h2>
<p>If your private NuGet feed requires authentication, you have a few options for how docker build can authenticate with your feed. One option is to store the credentials in plain text in the <code>NuGet.config</code> file, but this is not very secure. Plus, the intermediate layers from your docker build process will store this information.</p>
<p>You can use environment variables to pass this information into your <code>docker build</code> process without storing secrets in your <code>NuGet.config</code> file.</p>
<p>You can take advantage of NuGet's built-in environment variable support <code>NuGetPackageSourceCredentials_&lt;feed-name&gt;</code>, where <code>&lt;feed-name&gt;</code> is the key of your private package source in <code>NuGet.config</code>.
When this environment variable is set, NuGet will automatically use it to authenticate to that source.</p>
<p>The format of the environment variable's value is <code>Username=...;Password=...;</code>. You can also add <code>ValidAuthenticationTypes=Basic</code> if you want to explicitly control how NuGet should authenticate against your repository.</p>
<p>Consider the following example.</p>
<pre><code>Username=myfeed-pat;Password=SuperSecretPWD!;ValidAuthenticationTypes=Basic
</code></pre>
<p>To configure it on Windows, open the System Environment Variables dialog and click "New...". Enter the special environment variable name and value.</p>
<p>In our working example we have a private NuGet repository named <code>myfeed</code> that is accessed with a personal access token username/password of <code>myfeed-pat/SuperSecretPWD!</code>.</p>
<p><img class="w-100" src="/media/docker-build-nuget/Screenshot%202024-05-29%20170233.png"></p>
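<p>On a Linux build agent, the same variable can be exported in the shell. The feed name and credentials below are the hypothetical ones from this example; NuGet matches the part after the underscore to the package source key in <code>NuGet.config</code>.</p>

```shell
# NuGet picks this variable up automatically for the source named "myfeed".
export NuGetPackageSourceCredentials_myfeed='Username=myfeed-pat;Password=SuperSecretPWD!;ValidAuthenticationTypes=Basic'

# Sanity check: the value is a semicolon-delimited list of key=value pairs
echo "$NuGetPackageSourceCredentials_myfeed" | tr ';' '\n'
```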
<h2>Use Environment Variables in Docker Build</h2>
<p>After you configure your environment variable, you can use it in your <code>Dockerfile</code> by adding these two lines. The first adds an argument to your docker build. The second assigns that argument's value to an environment variable inside the build context.</p>
<pre><code>ARG NuGetPackageSourceCredentials_myfeed
</code></pre>
<pre><code>ENV NuGetPackageSourceCredentials_myfeed=$NuGetPackageSourceCredentials_myfeed
</code></pre>
<p>Here is the final <code>Dockerfile</code> using environment variables to authenticate against a secured private NuGet repository.</p>
<pre><code>FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build-env
LABEL stage=build-env

ARG NuGetPackageSourceCredentials_myfeed

WORKDIR /app

# Copy and build
COPY ./src /app
COPY ./NuGet.config /app
ENV NuGetPackageSourceCredentials_myfeed=$NuGetPackageSourceCredentials_myfeed
RUN dotnet publish /app/MyApp.Web -c Release -o ./build/release --framework net8.0

# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0
EXPOSE 80
ENV ASPNETCORE_URLS http://+:80
WORKDIR /app
COPY --from=build-env /app/build/release .
ENTRYPOINT ["dotnet", "MyApp.Web.dll"]
</code></pre>
<h2>Passing Credentials</h2>
<p>You can now pass your build server's environment variable into <code>docker build</code> with <code>--build-arg</code> to set the corresponding environment variable during the build.</p>
<p>Here is the <code>docker build</code> command.</p>
<pre><code>docker build -f .\Dockerfile -t davidpuplava.com/nuget --build-arg NuGetPackageSourceCredentials_myfeed=$($Env:NuGetPackageSourceCredentials_myfeed) .
</code></pre>
<h2>Summary</h2>
<p>You can securely access a private NuGet repository with <code>docker build</code> by using environment variables to avoid storing secrets in source control.</p>
]]></description>
      <pubDate>Fri, 07 Jun 2024 16:28:26 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/how-to-use-private-nuget-feed-with-docker-build</guid>
    </item>
    <item>
      <title>How to Get PowerShell History Across All Sessions</title>
      <link>https://www.davidpuplava.com/coding-craft/how-to-get-powershell-history-across-all-sessions</link>
<description><![CDATA[<p>To see your PowerShell history across all sessions, read the saved PSReadLine history file. The command below pipes it through <code>findstr</code> to show only lines containing <code>cd</code>; omit the filter to see your full history.</p>
<pre><code>cat (Get-PSReadLineOption).HistorySavePath | findstr cd
</code></pre>
<p>You will see results similar to the following.</p>
<pre><code>cd .\test\
cd .\Test.Mvc\
cd C:\git\Test
cd C:\git
cd .\Test\
cd ..
cd .\test-old-hyperadmin-etc\
</code></pre>
]]></description>
      <pubDate>Mon, 20 May 2024 14:48:47 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/how-to-get-powershell-history-across-all-sessions</guid>
    </item>
    <item>
      <title>Use Local Nuget Feed in Dockerfile</title>
      <link>https://www.davidpuplava.com/coding-craft/use-local-nuget-feed-in-dockerfile</link>
<description><![CDATA[<p>"It works on my machine" but not when I built a docker image and ran it in production.</p>
<p>Story of my life.</p>
<h2>The Symptom</h2>
<p>Changes I'd make and run locally worked as expected on my dev machine. But when I'd deploy them to production, they wouldn't be there.</p>
<h2>More Context</h2>
<p>More specifically, I had forked an open source library to make a change I needed.</p>
<p>I then rebuilt the Nuget package, stored it in a local nuget repository source and referenced that local nuget package in my app.</p>
<p>I would run the app locally on my dev machine and, boom, the change works flawlessly.</p>
<p>To move the change to production, I built a new docker image and deployed that new image to the production server.</p>
<p>AHHHH!!! The change wouldn't show up, even though I know I'm running the new image.</p>
<h2>The Problem</h2>
<p>After many hours of insanity, quite literally trying the same thing over and over expecting different results, I finally realized the cause of my problem.</p>
<p>The Dockerfile I used to build the docker container has no knowledge or access to the local nuget feed on my dev machine.</p>
<p>Why did it take so long to realize this?</p>
<p>Because the open source library I was using is available publicly on nuget.org.
And the source of my frustration was that I did NOT, I repeat, DID NOT, use a different/unique/not-currently-used-on-nuget-dot-org version number (laziness, I guess).</p>
<p>So, everything built without error. But the build process would grab the nuget package from nuget.org, which did NOT have the changes I packaged into the local nuget feed version of the same library.</p>
<h2>The Solution</h2>
<p>I solved this problem by</p>
<ol>
<li>Copying the local nuget feed packages to a folder in the root of my git repository.</li>
<li>Adding a <code>COPY</code> instruction to the Dockerfile to ensure that the local nuget packages are available during the build step of the docker build.</li>
<li>Adding a <code>RUN dotnet nuget add source</code> command to register the local folder as a nuget package source during the docker build.</li>
</ol>
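<p>The Dockerfile additions might look like the following sketch. The <code>local-packages</code> folder name and the <code>local</code> source name are placeholders for whatever you use in your own repository.</p>
<pre><code># Make the local nuget packages available inside the build context
COPY ./local-packages /app/local-packages

# Register that folder as a package source before the restore/publish step
RUN dotnet nuget add source /app/local-packages --name local
</code></pre>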
<p>After I made that change, the new docker image worked as expected in the production environment.</p>
]]></description>
      <pubDate>Fri, 26 Apr 2024 03:50:36 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/use-local-nuget-feed-in-dockerfile</guid>
    </item>
    <item>
      <title>Recursively remove bin/obj folders Powershell</title>
      <link>https://www.davidpuplava.com/coding-craft/recursively-remove-bin-obj-folders-powershell</link>
      <description><![CDATA[<h2>Update: July 28, 2025</h2>
<p>Here is a better command I found for removing only those <code>bin</code> and <code>obj</code> folders.</p>
<pre><code>Get-Childitem -Path . -Include bin,obj -Recurse -Directory | Where-Object { $_.FullName -notmatch '\\(node_modules|packages)\\' } | Remove-Item -Recurse -Force
</code></pre>
<p>Here is a breakdown of the command.</p>
<p><code>Get-ChildItem -Path .</code> is used to get the file and directory objects in the current path. Optionally, you can change the argument passed to <code>-Path</code> to whichever path you like. If your project folders live solely in the <code>./src</code> directory, you can use <code>Get-ChildItem -Path ./src</code> instead to narrow your directory traversal.</p>
<p><code>-Include bin,obj</code> filters the child items to just the <code>bin</code> and <code>obj</code> folders. If you want to include different directories you can change the argument passed to <code>-Include</code>.</p>
<p><code>-Recurse -Directory</code> recursively descends through all the subdirectories and returns only directories.</p>
<p>Note that this above command is three separate commands piped together. The first command <code>Get-Childitem -Path . -Include bin,obj -Recurse -Directory</code> is then piped to <code>Where-Object { $_.FullName -notmatch '\\(node_modules|packages)\\' }</code>.</p>
<p>Here is a breakdown of the <code>Where-Object { $_.FullName -notmatch '\\(node_modules|packages)\\' }</code> command.</p>
<p><code>Where-Object  { ... }</code> is an additional filter on all the directories that the first command is recursively navigating through. Everything inside the curly braces <code>{ }</code> is how it filters.</p>
<p><code>$_.FullName -notmatch '...'</code> is a regular expression operation where <code>-notmatch</code> means that any directory paths that do NOT match the filter string (the thing between the single quotes) will be included. At a high level, this whole command is how you exclude certain directories from being considered when deleting the <code>bin</code> and <code>obj</code> folders. In particular, the <code>node_modules</code> directory is excluded because NPM downloads packages that have their own <code>bin</code> and <code>obj</code> folders, and deleting those in turn disrupts the <code>gulp</code> tooling, which is not desired. Additionally, the <code>packages</code> folder is excluded because it is sometimes used by <code>nuget</code> to store downloaded packages. I skip it here for the same reasons as <code>node_modules</code>.</p>
<p>Essentially the entire second command <code>Where-Object { $_.FullName -notmatch '\\(node_modules|packages)\\' }</code> is about skipping those directories that contain a <code>bin</code> or <code>obj</code> folder I want to keep, before sending the rest to the third and final command, <code>Remove-Item -Recurse -Force</code>.</p>
<p>The final command <code>Remove-Item -Recurse -Force</code> is what does the deleting of the directory.</p>
<p><code>Remove-Item</code> is the PowerShell command that does the deleting. The <code>-Recurse</code> switch recursively deletes child items like files and nested folders. The <code>-Force</code> option deletes the directory without requiring you to confirm deleting a non-empty directory.</p>
<p>That's it. That's the command that I use to recursively remove <code>bin</code> and <code>obj</code> folders. Use this rather than the implementation below, which is there for reference.</p>
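<p>For comparison, here is a rough POSIX shell equivalent of that PowerShell pipeline (assuming GNU <code>find</code>). It is demonstrated against a throwaway fixture directory so it is safe to try anywhere; swap the fixture for your real source path.</p>

```shell
# Build a throwaway fixture that mimics a solution layout
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p src/App/bin src/App/obj node_modules/pkg/bin

# Delete bin/obj directories, pruning node_modules and packages trees
find . -type d \( -name node_modules -o -name packages \) -prune \
  -o -type d \( -name bin -o -name obj \) -prune -exec rm -rf {} +

find . -type d -name bin   # only ./node_modules/pkg/bin remains
```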
<h2>Original Implementation (DO NOT USE, kept here for legacy reasons)</h2>
<p>Run these commands to remove all the <code>bin</code> and <code>obj</code> folders.</p>
<pre><code>get-childitem $path -include bin -recurse | remove-item
get-childitem $path -include obj -recurse | remove-item
</code></pre>
<p>Special thanks to <a href="https://stevemichelotti.com/use-powershell-to-clean-your-visual-studio-solution">this post</a> for giving me the answer.</p>
<h2>Why?</h2>
<p>Sometimes in a .NET solution, even the batch build's clean and rebuild commands do not clear out all files from the bin and obj folders. The reason is that <code>msbuild</code> will not remove files it is not aware of.</p>
<p>The exact scenario I needed to solve was that I had changed the version number of a nuget library, but when I rebuilt the solution, the old assembly file was still in the bin folder and was used rather than the new downgraded version.</p>
<p>More precisely, in my case I had repackaged a new version of a local nuget package using the same exact version number. Because...reasons.</p>
<h2>Conclusion</h2>
<p>When in doubt, clean it out.</p>
]]></description>
      <pubDate>Mon, 28 Jul 2025 19:17:45 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/recursively-remove-bin-obj-folders-powershell</guid>
    </item>
    <item>
      <title>AT&amp;T Fiber - Own Router</title>
      <link>https://www.davidpuplava.com/coding-craft/at-t-fiber-own-router</link>
      <description><![CDATA[<h2>The Wait is Over</h2>
<p>I've waited a long time for AT&amp;T Fiber to arrive in my neighborhood and it is finally here. Naturally I signed up as soon as I could and scheduled my fiber installation at their next available slot.</p>
<p>The installation went great. The AT&amp;T technician, Richard, was very knowledgeable and was able to install the AT&amp;T Gateway in my server closet on the other side of my house from where the fiber entered my home.</p>
<p>AT&amp;T's default configuration leaves you with working WiFi that you can connect to and start browsing the internet. This makes sense because most people do not wire their houses with ethernet.</p>
<p>As a software developer tech nerd, I'm not most people. My appetite for punishment led me to wire my house for ethernet, at least for those devices that benefit from high throughput like my work computer and streaming devices.</p>
<p>Naturally, I have my own router with a custom configuration to handle all of that.</p>
<p>Additionally, this router needs to "live on the edge" as they say to handle some services that I host out of my own homelab.</p>
<p>What this means is I need to "bring my own router" to use with the AT&amp;T Gateway that was installed.</p>
<h2>Plug it in, and it works! Sort of...</h2>
<p>To use your own router with AT&amp;T's BGW320-505 gateway, you simply plug it into one of the yellow 1Gb ethernet ports on the back of the device. The blue 5Gb port strongly recommends CAT 7 ethernet, which I honestly thought was a made-up category of ethernet, but it turns out I'm just too old to keep up with technology.</p>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20111246.png"></p>
<p>You then use a computer connected to your AT&amp;T Gateway to browse to the default address <a href="http://192.168.1.254">http://192.168.1.254</a> and view your Gateway settings.</p>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20100440.png"></p>
<p>Navigate to the <code>Firewall</code> page, and then the <code>IP Passthrough</code> subpage and do the following:</p>
<ol>
<li>Set <code>Allocation Mode</code> to <code>Passthrough</code> to allow the AT&amp;T Gateway to pass your WAN IP address to your own router.</li>
<li>Set <code>Passthrough Mode</code> to <code>DHCPS-fixed</code> so you can specify your router as the client to receive the passed through WAN IP (rather than the first client that connects).</li>
<li>In the <code>Passthrough Fixed MAC Address</code> enter your router's MAC address which you can find on the back or side sticker physically attached to your router. You can also click the <code>Choose from list</code> to see the list of connected devices; if it's obvious which device is your router, select it to automatically fill in the <code>Manual Entry</code> box with your router's MAC address.</li>
</ol>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20104624.png"></p>
<p>After that, I restarted both the AT&amp;T Gateway, and my own router and everything worked great!</p>
<p>Until...</p>
<h2>I Couldn't Connect to the Internet</h2>
<p>When I woke up the next morning, nothing worked. I couldn't connect to the internet, I couldn't connect to the AT&amp;T Gateway, and I couldn't connect to my own router.</p>
<p>I did the usual steps of checking my <code>ipconfig</code> settings, <code>ping</code> and <code>tracert</code> but nothing seemed to work.</p>
<p>Oddly enough, email and a couple other apps would update with notifications as if it sort of worked, but everything was intermittent.</p>
<p>To connect to my AT&amp;T Gateway, I had to explicitly connect to the default WIFI and then everything from that computer worked; I could browse the internet, reach the router etc.</p>
<p>I finally found <a href="https://forums.att.com/conversations/att-internet-equipment/how-do-i-connect-a-router-printer-and-other-devices-to-my-att-uverse-gateway-information-from-the-att-community/5defcb4abad5f2f6067d0d33?source=ESSZ0SSPR00facsEM&amp;wtExtndSource=20180810190131_AT&amp;T%20Fiber%20Equipment_Wireline_LITHIUM_1718187401=">AT&amp;T's article on how to use your own router</a> and that helped me fix the issue.</p>
<p>The key issue for me was that AT&amp;T's Gateway used the same subnet <code>192.168.1.0/24</code> as my own router's subnet.</p>
<p>Even though the ranges didn't overlap, the power struggle was real and caused all sorts of issues.</p>
<p>The key point from AT&amp;T's article was this:</p>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20112337.png"></p>
<p>After I read that, resolving the issue was as simple as changing the default subnet for the AT&amp;T Gateway.</p>
<h2>Fixing the Problem</h2>
<p>Follow these steps to resolve the subnet conflict.</p>
<ol>
<li>Navigate to your AT&amp;T configuration page at <a href="http://192.168.1.254">http://192.168.1.254</a>.</li>
<li>Go to the <code>Home Network</code> tab.</li>
<li>Select the <code>Subnets &amp; DHCP</code> sub page.</li>
<li>Change the <code>Device IPv4 Address</code> to use a different IP address, for example <code>192.168.3.254</code>.</li>
<li>Update the DHCPv4 Start Address to use the new subnet, in this case <code>192.168.3.x</code>, so enter the value <code>192.168.3.64</code>. Notice I'm only changing the third octet.</li>
<li>Do the same for the DHCPv4 End Address; here I enter <code>192.168.3.253</code>.</li>
<li>Click Save at the bottom of the screen.</li>
</ol>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20112839.png"></p>
<p>I restarted both my AT&amp;T Gateway and my router, just for good measure but I don't actually think it's necessary.</p>
<p>After I made the above change, everything started working.</p>
<h2>One Last Thing</h2>
<p>Since I already had a WIFI Access Point, I wanted to avoid having the AT&amp;T Gateway Access Point WIFI radio's from interfering, so I went ahead and turned them off.</p>
<ol>
<li>Navigate over to the <code>Home Network</code> tab.</li>
<li>Select the <code>Wi-Fi</code> sub page.</li>
<li>Click <code>Advanced Options</code> to display more options.</li>
</ol>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20113515.png"></p>
<ol start="4">
<li>From here, I set the <code>Wi-Fi Operation</code> to <code>Off</code> for the <code>2.4 GHz Wi-Fi Configuration</code>.</li>
</ol>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20113741.png"></p>
<ol start="5">
<li>Next, under the <code>5 GHz Wi-Fi Configuration</code> section, I set <code>Wi-Fi Operation</code> to <code>Off</code>.</li>
</ol>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20114030.png"></p>
<ol start="6">
<li>Lastly, don't forget to click the <code>Save...</code> button.</li>
</ol>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20114243.png"></p>
<ol start="7">
<li>When you see this warning screen, go ahead and click the <code>Continue</code> button to disable the Wi-Fi radios on the AT&amp;T Gateway.</li>
</ol>
<p><img class="w-100" src="/media/att-gateway-own-router/Screenshot%202024-03-22%20114339.png"></p>
<h2>That's a Wrap!</h2>
<p>Thanks for reading and I hope you found this article useful if you run into the same issues connecting your own router to an AT&amp;T Gateway.</p>
<p>Special thanks to this <a href="https://www.youtube.com/watch?v=aShbl1JZMx8">YouTube video</a> as well, which confirmed what I thought needed to be done to use my own router.</p>
]]></description>
      <pubDate>Fri, 22 Mar 2024 16:57:58 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/at-t-fiber-own-router</guid>
    </item>
    <item>
      <title>Fix "file exceeds the maximum upload size" in OrchardCore</title>
      <link>https://www.davidpuplava.com/coding-craft/fix-image-size-upload-error-in-orchard-core</link>
      <description><![CDATA[<h2>Getting the File Exceeds the Maximum Upload Size</h2>
<p>If you receive a <code>file exceeds the maximum upload size</code> error in Orchard Core and you're certain the settings allow an image of the size you've uploaded, check whether a reverse proxy in front of your site is failing with an <code>upstream sent too big header</code> error from the upstream server.</p>
<p><img src="/media/file-exceeds-size/Screenshot%202024-01-12%20223126.png"></p>
<p>In my case, the problem was Nginx and not OrchardCore.</p>
<h2>502 Bad Gateway from OrchardCore running in Kubernetes</h2>
<p>In particular, I run Orchard Core in a Kubernetes cluster configured with an Nginx Ingress Controller.</p>
<p>My symptom was that I couldn't upload images larger than 768 KB even though my OrchardCore settings allowed images up to 4 MB.</p>
<p>After solving a different <a href="https://www.davidpuplava.com/coding-craft/fix-upstream-sent-too-big-header-error">502 Bad Gateway error when logging into Orchard</a>, I learned that the Nginx Ingress Controller was erroring out because it was running out of buffer space.</p>
<p>From that troubleshooting experience, I learned that I can configure individual Ingresses to increase that buffer size.</p>
<h2>Increase Nginx Buffer Size with Annotations</h2>
<p>The solution was to add these annotations to my ingress for my particular OrchardCore site.</p>
<pre><code>    nginx.ingress.kubernetes.io/proxy-body-size: 30m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 256k
    nginx.ingress.kubernetes.io/proxy-buffering: 'on'
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: 1024m
</code></pre>
<p>And that's it!</p>
<p>After I deployed the above ingress annotations to increase the Nginx Ingress Controller buffer size, I could successfully upload image files up to the size configured in OrchardCore media settings.</p>
]]></description>
      <pubDate>Tue, 16 Jan 2024 15:15:04 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/fix-image-size-upload-error-in-orchard-core</guid>
    </item>
    <item>
      <title>How to Fix "upstream sent too big header" Error</title>
      <link>https://www.davidpuplava.com/coding-craft/fix-upstream-sent-too-big-header-error</link>
      <description><![CDATA[<h2>First, the TLDR;</h2>
<blockquote>
<p>Annotate your Nginx ingress to increase the proxy buffer size for your upstream server. By default the buffers are inadequately small, and you end up with the errors shown below in your reverse proxy logs.</p>
</blockquote>
<p>For the following symptoms</p>
<ul>
<li>You access your web application, service or API and get a 502 Bad Gateway even though you're certain the operation should work as expected.</li>
<li>You also enable verbose logging and you're not seeing any log messages or errors that would indicate what the problem might be.</li>
</ul>
<p>The more involved answer follows.</p>
<h2>Getting a 502 Bad Gateway for OrchardCore Running in Kubernetes</h2>
<p>Talk about deep troubleshooting. This one was several days in the making.</p>
<p>I was setting up a new website in my OrchardCore deployment. The tenant loaded just fine, but I was getting a 502 Bad Gateway when trying to log in.</p>
<p>This was strange because the authentication code is all native Orchard Core (and ASP.NET) default stuff. But I had recently upgraded to OrchardCore 1.8.0 so maybe there was a problem with that.</p>
<p>I enabled verbose application logging and there was not even a mention of any kind of problem or error.</p>
<p>WTF.</p>
<h2>Check Kubernetes Nginx Ingress Controller Logs</h2>
<p>I run OrchardCore in a Kubernetes cluster.</p>
<p>This cluster uses an Nginx Ingress controller to handle incoming requests for this particular client application.</p>
<p>After seeing nothing interesting in OrchardCore's application logs, and also nothing in the Kestrel host logs, I decided to check my reverse proxy or ingress controller logs.</p>
<p>For my MicroK8s setup, my ingress pods are in their own ingress namespace. There are multiple pods and I want to see the logs for all of them. Luckily, they share a common label, <code>nginx-ingress-microk8s</code>.</p>
<h2>Identify the Upstream Sent Too Big Header Error</h2>
<p>Run the following (or similar) command to see your ingress logs.</p>
<pre><code>kubectl logs -l name=nginx-ingress-microk8s -n ingress 
</code></pre>
<p>Then search for <code>upstream sent too big header</code>, as in the following log output.</p>
<pre><code>2024/01/10 15:30:05 [error] 711#711: *4185444 upstream sent too big header while reading response header from upstream, client: 10.1.40.193, server: www.*********.com, request: "POST /Login HTTP/2.0", upstream: "http://10.1.x.x:80/Login", host: "www.*********.com"
</code></pre>
<p>Boom, found it. After a little web searching you find out that Nginx has very small proxy buffers by default and they need to be increased.</p>
<h2>Add Ingress Annotations Configuration to Increase Nginx Proxy Buffer Size</h2>
<p>The solution was to add these annotations to the ingress configuration.</p>
<pre><code>    nginx.ingress.kubernetes.io/proxy-body-size: 30m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 256k
    nginx.ingress.kubernetes.io/proxy-buffering: 'on'
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: 1024m
</code></pre>
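<p>For context, those annotations sit under <code>metadata.annotations</code> of the Ingress resource. Here is a minimal sketch of a full manifest; the names, host, and service details are placeholders, not my actual configuration.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orchardcore-site            # placeholder
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 30m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 256k
    nginx.ingress.kubernetes.io/proxy-buffering: 'on'
    nginx.ingress.kubernetes.io/proxy-buffers-number: '4'
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: 1024m
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com         # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orchardcore   # placeholder service
                port:
                  number: 80
</code></pre>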
<p>And that's it. After I added and deployed these annotations to the Nginx Ingress for my application, I was able to successfully log in no problem.</p>
<p>Incidentally, this also fixed my <a href="https://www.davidpuplava.com/coding-craft/fix-image-size-upload-error-in-orchard-core">OrchardCore "file exceeds the maximum upload size" error</a>.</p>
]]></description>
      <pubDate>Tue, 16 Jan 2024 15:15:04 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/fix-upstream-sent-too-big-header-error</guid>
    </item>
    <item>
      <title>Best 3D Printer for Beginners</title>
      <link>https://www.davidpuplava.com/coding-craft/best-3d-printer-for-beginners</link>
      <description><![CDATA[<p>The Ender-3 Pro. Or non-pro model. But get the Pro.</p>
<h2>Why?</h2>
<p>The Ender-3 Pro is very easy to put together. It prints very nicely, and really gives a beginner a sense of all the things that go into 3D printing.</p>
<p>It also gradually introduces you to some of the more challenging sides of 3D printing, so you can troubleshoot them one at a time. For example, any of the <a href="https://www.davidpuplava.com/posts/ender-3-x-axis-fix">axes might start acting weird</a>, and you'll have to work through solving that.</p>
<p>Also, the manufacturer, Creality, is big on their stuff being <a href="https://www.creality.com/products/ender-3-pro-3d-printer">open source</a> which is nice.</p>
<p>It's also priced just shy of $300 USD which is more affordable than some of the other "beginner" printers out there.</p>
<p>The price per quality ratio is the biggest factor. I strongly believe the Ender-3 Pro gives you the best value for your money.</p>
<h2>Some Caveats</h2>
<p>I have only owned two different 3D printers, so my sample size is very small.</p>
<p>That said, I knew nothing about 3D printing before I received the Ender-3 Pro as a gift several years ago.</p>
<p>And I was up and running and printed my first thing a couple days later.</p>
<p>I really think this is the best 3D printer for a beginner or hobbyist.</p>
<hr>
]]></description>
      <pubDate>Thu, 05 Dec 2024 21:08:56 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/best-3d-printer-for-beginners</guid>
    </item>
    <item>
      <title>OrchardCore 1.7 Released!</title>
      <link>https://www.davidpuplava.com/coding-craft/orchardcore-1-7-released</link>
      <description><![CDATA[<p>OrchardCore version 1.7 is now available for you to use!</p>
<p>In this post, I cover some of my favorite features of OrchardCore 1.7.</p>
<h2>Two-Factor Authentication</h2>
<p>OrchardCore now supports 2-factor authentication (2FA) with email and SMS messages. But by far my favorite feature is support for authenticator apps.</p>
<p>You can now use your preferred authenticator app, such as Microsoft Authenticator or Google Authenticator, with one-time passwords (OTP).</p>
<p>You can enable two-factor authentication by navigating to the <code>Features</code> section of the <code>Admin</code> area of your OrchardCore site. Search for <code>two-factor</code> to filter the feature list. There you can choose to enable any and all methods: Email, SMS and Authenticator App.</p>
<p><img class="w-100" src="/media/orchardcore-1.7-released/Screenshot%202023-09-20%20170024.png"></p>
<p>Once enabled, you'll see a new menu item named <code>Security</code> under the user dropdown caret menu in the upper right-hand corner.</p>
<p><img src="/media/orchardcore-1.7-released/Screenshot%202023-09-20%20170115.png"></p>
<p>On the <code>TwoFactor</code> configuration settings screen, you see a list of the two-factor methods you enabled from the features screen. If you enabled the Authenticator app, you'll see that option. If you have yet to set it up, click the <code>Add</code> button to get started. If it's already set up, you'll see <code>Set</code> and <code>Reset</code> buttons.</p>
<p><img class="w-100" src="/media/orchardcore-1.7-released/Screenshot%202023-09-20%20170125.png"></p>
<p>Click <code>Add</code> to get to the Configure Authenticator App screen.</p>
<p>On the configuration screen, you'll see everything you need to get set up. If you already have an authenticator app, scan the QR code or type the key code into your app. If you don't have an app, you'll see links to the Android and iOS app stores to install one.</p>
<p>Once your authenticator app has an entry for your site, you can take the current OTP code and key it into step 3 on the screen to verify that the authenticator app was registered correctly.</p>
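<p>As an aside, those one-time codes aren't stored anywhere: the app and the server each derive them independently from the shared key you just scanned, using the standard TOTP algorithm (RFC 6238). The sketch below (in Python, purely for illustration — this is not OrchardCore's implementation) shows how a six-digit code falls out of an HMAC over the current 30-second time step:</p>

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, period=30, digits=6):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32.upper())     # the key from the QR code
    counter = int(for_time) // period              # 30-second time step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

<p>Because both sides run this same derivation, the code you type in step 3 proves the app registered the correct key — no secret ever travels with the code itself.</p>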
<p><img class="w-100" src="/media/orchardcore-1.7-released/Screenshot%202023-09-20%20170143.png"></p>
<p>After verification, you are all set up with 2FA using an authenticator app. When you log in with your password, you'll be prompted for the one-time password.</p>
<p><img class="w-100" src="/media/orchardcore-1.7-released/Screenshot%202023-09-20%20211851.png"></p>
<p>And that's it!</p>
<p>The best part, as with all features in OrchardCore, is that this works across all your tenants (if enabled). It just works. And it's great.</p>
]]></description>
      <pubDate>Thu, 21 Sep 2023 03:14:01 GMT</pubDate>
      <guid isPermaLink="true">https://www.davidpuplava.com/coding-craft/orchardcore-1-7-released</guid>
    </item>
  </channel>
</rss>