Matched Asymptotic Expansion for the ODE $u_t = 1+\varepsilon/\log(u)$: A Comprehensive Guide


Hey guys! Let's dive into a fascinating problem involving ordinary differential equations (ODEs) and asymptotics. We're going to explore the ODE $u_t = 1+\varepsilon/\log(u)$ with the initial condition $u(0) = k$, where $0 < u < 1$ and $\varepsilon$ is a small parameter ($\varepsilon \ll 1$). This type of problem is perfect for using matched asymptotic expansions, a powerful technique for handling singular perturbation problems. So, buckle up, and let’s get started!

Introduction to the Problem

Our main goal here is to understand the behavior of the solution $u(t)$ as $\varepsilon$ approaches zero. This isn't just a theoretical exercise; problems like this pop up all over the place in physics, engineering, and even biology. Think about situations where you have a small effect (represented by $\varepsilon$) influencing a system described by a differential equation. Understanding how these small effects change the overall behavior is crucial.

The ODE we're tackling is:

$$u_t = 1+\frac{\varepsilon}{\log(u)}, \qquad u(0)=k,$$

where $0 < u < 1$. The presence of the $\frac{\varepsilon}{\log(u)}$ term is what makes this a singular perturbation problem. Why singular? Because if we naively set $\varepsilon = 0$, we get a much simpler equation, $u_t = 1$, which has a straightforward solution. However, this simple solution might not accurately capture the behavior of the full equation, especially when $u$ is close to 1, where $\log(u)$ becomes very small, and the term $\frac{\varepsilon}{\log(u)}$ can become significant.

The initial condition $u(0) = k$ gives us a starting point for our solution. The fact that $0 < u < 1$ is also important because the logarithm of a number between 0 and 1 is negative, which influences the dynamics of the equation. We're essentially trying to find a solution that starts at $k$ and evolves over time according to this equation, keeping in mind the small but impactful $\varepsilon$ term.
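Before doing any asymptotics, it helps to see the ODE in action. Here's a small numerical sketch (illustrative values $\varepsilon = 0.05$ and $k = 0.2$, not mandated by the problem) comparing the full solution against the naive $\varepsilon = 0$ solution $u = t + k$. One thing worth noticing: the right-hand side vanishes when $\log(u) = -\varepsilon$, so the full solution levels off near $u = e^{-\varepsilon}$ instead of crossing 1 the way the naive solution does.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, k = 0.05, 0.2  # illustrative values: eps small, 0 < k < 1

def rhs(t, u):
    # the full ODE: u_t = 1 + eps / log(u)
    return 1.0 + eps / np.log(u[0])

sol = solve_ivp(rhs, (0.0, 2.0), [k], rtol=1e-9, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 2.0, 9)
u = sol.sol(t)[0]
for ti, ui in zip(t, u):
    print(f"t={ti:4.2f}  full u={ui:.4f}  naive t+k={ti + k:.4f}")
```

The two columns agree early on but separate sharply once $t + k$ approaches 1, which is exactly the breakdown the matched-expansion machinery is built to handle.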

To really dig into this, we'll use the method of matched asymptotic expansions. This technique involves breaking the problem into different regions where different approximations are valid and then “matching” these approximations together to get a complete solution. It's like having different lenses to look at the problem, each focusing on a particular aspect, and then combining the views for a comprehensive picture.

In the following sections, we'll break down the problem step by step, identifying the different regions, finding approximate solutions in each region, and then matching them together. Get ready to roll up your sleeves and dive into some cool mathematical techniques!

Outer Solution

Okay, let’s kick things off by finding what we call the outer solution. This solution is valid away from any boundary layers or regions where the solution changes rapidly. In our case, this means we're looking for a solution that works well when $u$ is not too close to 1. Remember, the trouble arises when $u$ gets close to 1 because the $\log(u)$ term in the denominator of $\frac{\varepsilon}{\log(u)}$ makes things blow up.

To find the outer solution, we assume that we can express $u$ as a regular perturbation series in $\varepsilon$:

$$u(t) = u_0(t) + \varepsilon u_1(t) + \varepsilon^2 u_2(t) + \cdots$$

This means we're trying to approximate $u(t)$ as a sum of terms, each with a different power of $\varepsilon$. The idea is that since $\varepsilon$ is small, the higher-order terms ($\varepsilon^2$, $\varepsilon^3$, etc.) will be even smaller and contribute less to the overall solution. Our main focus will be on the first two terms, $u_0(t)$ and $u_1(t)$, as they’ll give us a good approximation for small $\varepsilon$.

Now, we plug this expansion into our original ODE:

$$u_t = 1+\frac{\varepsilon}{\log(u)}$$

This gives us:

$$(u_0 + \varepsilon u_1 + \cdots)_t = 1+\frac{\varepsilon}{\log(u_0 + \varepsilon u_1 + \cdots)}$$

We also need to expand the $\log$ term. Recall the Taylor series expansion for $\log(1+x)$ around $x=0$:

$$\log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots$$

Since we have $\log(u_0 + \varepsilon u_1 + \cdots)$, we first factor out $u_0$, so that the remaining factor has the form $1 + (\text{small})$ when $\varepsilon$ is small. Then, we can write the logarithm term as:

$$\log(u_0 + \varepsilon u_1 + \cdots) = \log\left(u_0\left(1+\frac{\varepsilon u_1}{u_0} + \cdots\right)\right) = \log(u_0) + \log\left(1+\frac{\varepsilon u_1}{u_0} + \cdots\right)$$

Using the Taylor expansion for $\log(1+x)$, we get:

$$\log(u_0 + \varepsilon u_1 + \cdots) \approx \log(u_0) + \frac{\varepsilon u_1}{u_0} + O(\varepsilon^2)$$

Substituting this back into our equation, we get:

$$u_{0t} + \varepsilon u_{1t} + \cdots = 1 + \frac{\varepsilon}{\log(u_0) + \frac{\varepsilon u_1}{u_0} + \cdots}$$

Now, we need to expand the fraction $\frac{\varepsilon}{\log(u_0) + \frac{\varepsilon u_1}{u_0} + \cdots}$. We can rewrite this as:

$$\frac{\varepsilon}{\log(u_0)}\left(1 + \frac{\varepsilon u_1}{u_0\log(u_0)} + \cdots\right)^{-1}$$

Using the binomial expansion $(1+x)^{-1} \approx 1 - x$ for small $x$, we have:

$$\frac{\varepsilon}{\log(u_0) + \frac{\varepsilon u_1}{u_0} + \cdots} \approx \frac{\varepsilon}{\log(u_0)} - \frac{\varepsilon^2 u_1}{u_0(\log(u_0))^2} + \cdots$$
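Expansions like these are easy to get wrong by a sign or a factor, so it's worth letting a computer algebra system check the algebra. A small sympy sketch (with $u_0$, $u_1$ treated as plain positive symbols, since the expansion is purely algebraic in $\varepsilon$):

```python
import sympy as sp

eps = sp.symbols('varepsilon', positive=True)
u0, u1 = sp.symbols('u_0 u_1', positive=True)

# expand eps / log(u0 + eps*u1) in powers of eps
expr = eps / sp.log(u0 + eps * u1)
series = sp.series(expr, eps, 0, 3).removeO()

# the two terms derived in the text
expected = eps / sp.log(u0) - eps**2 * u1 / (u0 * sp.log(u0)**2)
print(sp.simplify(series - expected))  # 0
```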

So, our ODE becomes:

$$u_{0t} + \varepsilon u_{1t} + \cdots = 1 + \frac{\varepsilon}{\log(u_0)} - \frac{\varepsilon^2 u_1}{u_0(\log(u_0))^2} + \cdots$$

Next, we equate the coefficients of different powers of $\varepsilon$. This means we group together all the terms that have the same power of $\varepsilon$ and set their sums equal.

Order $\varepsilon^0$

For the terms without $\varepsilon$ (i.e., at order $\varepsilon^0$), we have:

$$u_{0t} = 1$$

This is a simple ODE that we can solve by integrating both sides with respect to $t$:

$$u_0(t) = t + C$$

To find the constant $C$, we use the initial condition $u(0) = k$. So, $u_0(0) = k$, which gives us:

$$k = 0 + C \implies C = k$$

Thus, our zeroth-order outer solution is:

$$u_0(t) = t + k$$
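Since $u_{0t} = 1$ came from dropping an $O(\varepsilon)$ term, the leading-order solution should track the true solution with an $O(\varepsilon)$ error as long as we stay away from $u \approx 1$. A quick numerical sketch of that scaling (illustrative $k = 0.2$, restricted to $t \in [0, 0.4]$ so $u$ stays well below 1): halving $\varepsilon$ should roughly halve the error.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 0.2

def max_error(eps):
    # max deviation of the full solution from the leading-order outer solution t + k
    rhs = lambda t, u: 1.0 + eps / np.log(u[0])
    t = np.linspace(0.0, 0.4, 101)
    sol = solve_ivp(rhs, (0.0, 0.4), [k], t_eval=t, rtol=1e-10, atol=1e-12)
    return np.max(np.abs(sol.y[0] - (t + k)))

e1, e2 = max_error(0.05), max_error(0.025)
print(e1 / e2)  # close to 2: the error scales like eps
```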

Order $\varepsilon^1$

For the terms with $\varepsilon^1$, we have:

$$u_{1t} = \frac{1}{\log(u_0)} = \frac{1}{\log(t + k)}$$

This is another ODE that we can solve by integrating both sides with respect to $t$:

$$u_1(t) = \int \frac{1}{\log(t + k)}\, dt$$

This integral doesn't have a simple closed-form solution in terms of elementary functions. However, it's a well-known integral called the logarithmic integral, often denoted as $\text{li}(x)$. So, we can write:

$$u_1(t) = \text{li}(t + k) + D$$

where $D$ is another constant of integration. Unlike in many layer problems, we don't have to wait for matching to pin down $D$: the outer region contains $t = 0$ (nothing is singular there, since $\log(k)$ is finite), so the initial condition applies directly to the outer expansion. Since $u_0(0) = k$ already accounts for all of $u(0) = k$, we need $u_1(0) = 0$, which gives $D = -\text{li}(k)$. It's the constant in the inner solution (which we'll find next) that matching will determine. This is the magic of matched asymptotics – we allow constants to be fixed by matching conditions later on.

So, our first-order outer solution is:

$$u(t) \approx u_0(t) + \varepsilon u_1(t) = t + k + \varepsilon(\text{li}(t + k) + D)$$

This solution is valid away from $u \approx 1$. But what happens when $u$ gets close to 1? That’s where the inner solution comes into play!
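In case you want to evaluate this outer approximation numerically: the logarithmic integral satisfies $\text{li}(x) = \text{Ei}(\log x)$, and the exponential integral $\text{Ei}$ is available as scipy.special.expi. A short sketch that also double-checks, by finite differences, that $\text{li}(t+k)$ really is an antiderivative of $1/\log(t+k)$ (illustrative $k = 0.2$):

```python
import numpy as np
from scipy.special import expi

k = 0.2

def li(x):
    # logarithmic integral: li(x) = Ei(log x), valid here since 0 < x < 1
    return expi(np.log(x))

t = 0.3
h = 1e-6
deriv_fd = (li(t + k + h) - li(t + k - h)) / (2 * h)  # centered difference
deriv_exact = 1.0 / np.log(t + k)
print(deriv_fd, deriv_exact)
```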

Inner Solution

Alright, let's switch gears and zoom in on the region where $u$ is close to 1. This is where the term $\frac{\varepsilon}{\log(u)}$ really starts to make a difference, and our outer solution might not be accurate anymore. To tackle this, we're going to introduce a new timescale and a new variable that will help us “see” what’s happening in this region.

First, we need to figure out the appropriate inner variable. Since we're dealing with $\log(u)$, and we know that $\log(1) = 0$, we'll define a new variable $U$ such that:

$$U = \frac{\log(u)}{\varepsilon}$$

This scaling makes sense because when $u$ is close to 1, $\log(u)$ is small. Dividing by $\varepsilon$ magnifies this small quantity, allowing us to work with a variable that’s of order 1 in the inner region. Note that since $0 < u < 1$, we have $\log(u) < 0$, so $U$ is negative. Think of it like using a microscope to zoom in on the behavior near $u = 1$.

Next, we introduce a new timescale $\tau$ defined as:

$$\tau = \frac{t}{\varepsilon}$$

This rescaling of time is crucial because it allows us to slow down the dynamics in the inner region. In the original timescale $t$, things might be changing too quickly for us to see the details, but in the $\tau$ timescale, we can observe the behavior more clearly. It’s like watching a slow-motion replay of the action near $u = 1$.

Now, let's rewrite our original ODE in terms of these new variables. First, we have:

$$\frac{du}{dt} = \frac{du}{d\tau} \frac{d\tau}{dt} = \frac{1}{\varepsilon} \frac{du}{d\tau}$$

And our ODE is:

$$\frac{du}{dt} = 1 + \frac{\varepsilon}{\log(u)}$$

Substituting $U = \frac{\log(u)}{\varepsilon}$, we get $\log(u) = \varepsilon U$. So, our ODE in the inner variables becomes:

$$\frac{1}{\varepsilon} \frac{du}{d\tau} = 1 + \frac{\varepsilon}{\varepsilon U} = 1 + \frac{1}{U}$$

Multiplying both sides by $\varepsilon$, we have:

$$\frac{du}{d\tau} = \varepsilon \left(1 + \frac{1}{U}\right)$$

Now, we need to express $u$ in terms of $U$. Since $U = \frac{\log(u)}{\varepsilon}$, we can write:

$$u = e^{\varepsilon U}$$

So, our ODE becomes:

$$\frac{d}{d\tau}\left(e^{\varepsilon U}\right) = \varepsilon \left(1 + \frac{1}{U}\right)$$

Using the chain rule, we get:

$$\varepsilon e^{\varepsilon U} \frac{dU}{d\tau} = \varepsilon \left(1 + \frac{1}{U}\right)$$

Dividing both sides by ε\varepsilon, we have:

$$e^{\varepsilon U} \frac{dU}{d\tau} = 1 + \frac{1}{U}$$

Now, we consider the limit as $\varepsilon \rightarrow 0$. As $\varepsilon$ becomes very small, $e^{\varepsilon U}$ approaches 1 (as long as $U$ remains of order 1, which is precisely the inner region). Thus, our inner equation, to leading order, simplifies to:

$$\frac{dU}{d\tau} = 1 + \frac{1}{U}$$

This is a separable ODE, which we can solve! Let’s rewrite it as:

$$\frac{U}{U + 1}\, dU = d\tau$$

Now, we integrate both sides. For the left side, we can rewrite $\frac{U}{U + 1}$ as $1 - \frac{1}{U + 1}$, which makes the integration easier:

$$\int \left(1 - \frac{1}{U + 1}\right) dU = \int d\tau$$

Integrating, we get:

$$U - \log|U + 1| = \tau + C_1$$

where $C_1$ is a constant of integration. (The absolute value matters here: in our setting $U < -1$, because $u$ stays below the equilibrium value $e^{-\varepsilon}$ at which the right-hand side $1 + 1/U$ vanishes, so $U + 1$ is negative.) We’ll need to determine this constant by matching this inner solution with our outer solution. This is where the magic of matched asymptotics truly shines!

Before we move on to the matching process, let’s recap what we’ve found. We’ve derived an inner solution:

$$U - \log|U + 1| = \tau + C_1$$

where $U = \frac{\log(u)}{\varepsilon}$ and $\tau = \frac{t}{\varepsilon}$. This solution describes the behavior of $u$ near 1, and it’s now time to connect it to our outer solution to get a complete picture.
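We can sanity-check the implicit solution by integrating the inner equation numerically and confirming that $U - \log|U+1| - \tau$ stays constant along the trajectory (the absolute value is the safe choice since $U + 1 < 0$ on the branch with $U < -1$). A sketch with an illustrative starting value:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(tau, U):
    # leading-order inner equation: dU/dtau = 1 + 1/U
    return 1.0 + 1.0 / U[0]

U0 = -5.0  # illustrative: U < -1, moving toward the fixed point U = -1
sol = solve_ivp(rhs, (0.0, 4.0), [U0], rtol=1e-10, atol=1e-12, dense_output=True)

def invariant(tau):
    # should equal the integration constant C_1 for every tau
    U = sol.sol(tau)[0]
    return U - np.log(abs(U + 1.0)) - tau

print(invariant(0.0), invariant(2.0), invariant(4.0))
```

All three printed values agree, confirming the implicit relation along the numerical trajectory.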

Matching

Alright, time for the grand finale: matching! This is where we connect our outer and inner solutions to create a uniformly valid solution that works for all $t$. Think of it as smoothly blending two pieces of a puzzle together to form a complete picture.

The basic idea behind matching is that the inner limit of the outer solution should match the outer limit of the inner solution. Sounds like a mouthful, right? Let's break it down.

Expressing Solutions in Common Variables

First, we need to express both solutions in terms of a common variable. We'll use the original time variable $t$. Recall our outer solution:

$$u(t) \approx u_0(t) + \varepsilon u_1(t) = t + k + \varepsilon(\text{li}(t + k) + D)$$

And our inner solution:

$$U - \log|U + 1| = \tau + C_1$$

where $U = \frac{\log(u)}{\varepsilon}$ and $\tau = \frac{t}{\varepsilon}$. We can rewrite the inner solution as:

$$\frac{\log(u)}{\varepsilon} - \log\left|\frac{\log(u)}{\varepsilon} + 1\right| = \frac{t}{\varepsilon} + C_1$$

Inner Limit of the Outer Solution

Now, let’s take the inner limit of the outer solution. First we need to locate the inner region: it sits where $u$ gets close to 1, and since $u_0(t) = t + k$, that happens as $t \to (1-k)^-$. (There is no layer at $t = 0$: $\log(k)$ is finite there, which is why the initial condition could be applied directly to the outer solution.) Near $x = 1$ the logarithmic integral behaves like $\text{li}(x) \approx \gamma + \log|\log x|$, where $\gamma$ is the Euler-Mascheroni constant, and $\log(t+k) \approx t + k - 1$. Since $\log(u)$ is the natural quantity in the inner region, we record the inner limit of the outer solution in that form:

$$\log(u_{\text{outer}}) \approx \log(t+k) + \varepsilon\,\frac{\text{li}(t+k) + D}{t+k} \approx (t + k - 1) + \varepsilon\bigl(\gamma + \log(1 - t - k) + D\bigr)$$

This is the behavior of our outer solution as we approach the inner region; the divergence of $\text{li}(t+k)$ as $t + k \to 1$ is precisely the signal that the outer expansion breaks down there.

Outer Limit of the Inner Solution

Next, we take the outer limit of the inner solution. This means following the inner solution back toward the outer region, which corresponds to $U \to -\infty$ (remember that $U = \log(u)/\varepsilon$ is negative, and $|U|$ becomes large as $u$ pulls away from 1).

From the inner solution:

$$\frac{\log(u)}{\varepsilon} - \log\left|\frac{\log(u)}{\varepsilon} + 1\right| = \frac{t}{\varepsilon} + C_1$$

When $|U|$ is large, we can approximate:

$$\log|U + 1| \approx \log|U| = \log|\log(u)| - \log(\varepsilon) \approx \log(1 - t - k) - \log(\varepsilon)$$

using $|\log(u)| \approx 1 - u \approx 1 - t - k$ to leading order in the overlap region. Rearranging, the inner solution behaves like:

$$\log(u) \approx t + \varepsilon C_1 + \varepsilon \log(1 - t - k) - \varepsilon \log(\varepsilon)$$

Matching Condition

Now, we set the inner limit of the outer solution equal to the outer limit of the inner solution. This is the heart of the matching process. Writing both in terms of $\log(u)$, and using $\text{li}(t+k) \approx \gamma + \log(1 - t - k)$ near $t + k = 1$:

$$(t + k - 1) + \varepsilon\bigl(\gamma + \log(1 - t - k) + D\bigr) \approx t + \varepsilon C_1 + \varepsilon \log(1 - t - k) - \varepsilon \log(\varepsilon)$$

Notice that the awkward $\varepsilon \log(1 - t - k)$ terms cancel on both sides, which is a reassuring sign that the two expansions genuinely overlap. Comparing the remaining terms gives the matching condition:

$$\varepsilon C_1 = (k - 1) + \varepsilon\bigl(\gamma + D\bigr) + \varepsilon \log(\varepsilon)$$

Determining Constants

From the matching condition we can read off the inner constant:

$$C_1 = \frac{k - 1}{\varepsilon} + \log(\varepsilon) + \gamma + D$$

Note that $C_1$ is large, of order $1/\varepsilon$. That's not a problem: it simply reflects that we measured the inner time $\tau = t/\varepsilon$ from $t = 0$ rather than from the layer location $t \approx 1 - k$, so the constant has to encode that shift. The outer constant $D$, meanwhile, is fixed not by matching but by the initial condition, since the outer region contains $t = 0$: $u_1(0) = 0$ gives $D = -\text{li}(k)$, and hence

$$C_1 = \frac{k - 1}{\varepsilon} + \log(\varepsilon) + \gamma - \text{li}(k)$$
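As a rough numerical check of this kind of matching, we can read a value of $C_1$ off a numerical solution of the full ODE and compare it with the candidate formula $C_1 \approx (k-1)/\varepsilon + \log(\varepsilon) + \gamma - \text{li}(k)$ (with $\gamma$ the Euler-Mascheroni constant; treat this formula as an assumption of this sketch, coming from a leading-order Van Dyke-style matching). We evaluate $U - \log|U+1| - t/\varepsilon$ at the moment the solution passes through $u = 1 - \sqrt{\varepsilon}$, i.e. inside the overlap region; agreement is only up to the higher-order terms both expansions neglect, so the comparison is deliberately loose.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import expi

eps, k = 0.05, 0.2  # illustrative values

def li(x):
    # logarithmic integral via li(x) = Ei(log x)
    return expi(np.log(x))

def rhs(t, u):
    return 1.0 + eps / np.log(u[0])

u_mid = 1.0 - np.sqrt(eps)  # a point inside the overlap region

def hit(t, u):
    return u[0] - u_mid
hit.terminal = True  # stop when u first reaches u_mid

sol = solve_ivp(rhs, (0.0, 2.0), [k], events=hit, rtol=1e-10, atol=1e-12)
t_star = sol.t_events[0][0]

U = np.log(u_mid) / eps
C1_numeric = U - np.log(abs(U + 1.0)) - t_star / eps
C1_matched = (k - 1.0) / eps + np.log(eps) + np.euler_gamma - li(k)
print(C1_numeric, C1_matched)
```

With these values both numbers come out in the same ballpark (around $-18$ to $-19$), and the residual gap shrinks as $\varepsilon$ decreases.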

Composite Solution

Once we've determined the constants, we can construct a composite solution. This is a solution that combines the best parts of both the inner and outer solutions and is valid throughout the entire domain. A common way to construct a composite solution is to add the inner and outer solutions and then subtract the common part (the part that's already counted in both solutions):

$$u_{\text{composite}} = u_{\text{outer}} + u_{\text{inner}} - u_{\text{common}}$$

where $u_{\text{common}}$ is the common limit we found during the matching process.

Conclusion

And there you have it! We've walked through the process of using matched asymptotic expansions to solve a singular perturbation problem. We started with an ODE that had a tricky term involving a logarithm, broke the problem into outer and inner regions, found approximate solutions in each region, and then matched those solutions together to create a uniformly valid solution.

This technique is incredibly powerful and can be applied to a wide range of problems in various fields. While the details can get a bit hairy, the underlying idea is quite intuitive: break a hard problem into simpler pieces, solve the pieces, and then stitch them together.

Remember, matched asymptotic expansions are like having a Swiss Army knife in your mathematical toolkit. They might seem complicated at first, but once you get the hang of them, they can help you tackle some seriously challenging problems. Keep practicing, keep exploring, and you’ll be amazed at what you can achieve!