Suppose we have a pair of fluid streams exchanging heat (or mass), and we want to compute the amount exchanged for a given flow configuration, overall heat transfer coefficient $U$, and area along the process. Furthermore, the streams are real -- each is defined numerically by points $(Q, T)$ or an interpolant, rather than by a constant stream heat capacity. We want a method that converges with spatial resolution, converges in time for steady-state boundary conditions, and hopefully gives a reasonable guess under transient conditions. Let's start with some assumptions:
As a result of some of the assumptions, the method can overshoot, so let's keep the time step smallish.
The obvious starting point is to draw a cell with some $U \Delta A$, write out the transient first law, and extract the relevant equations for the rate of change of state. I'm going to define states at the inlet and outlet to the cell. So we have for one cell
$$ \Delta Q = \dot{m} (h_{in} - h_{out}) + \int_{cell} \dot{h} dm $$Now we need to provide a finite element approximation of the mass integral / time rate of change term, and estimate the $\Delta Q$ resulting from interaction with the other stream. First, let's use a first order central method for the integral term:
$$ \int_{cell} \dot{h} dm \approx (\rho A_c \Delta x) \left(\frac{1}{2}\right)\left(\dot{h}_{in}+\dot{h}_{out}\right) $$We will decide later how to collect the time rate of change terms. Meanwhile, the local heat transfer rate is
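Collecting these pieces, the per-cell balance can be sketched as plain functions. This is only a sketch: the names `cell_storage_rate` and `cell_first_law` are illustrative, and $\rho$, $A_c$, $\Delta x$ are assumed uniform over the cell.

```python
def cell_storage_rate(h_dot_in, h_dot_out, rho, A_c, dx):
    """Trapezoidal (first order central) approximation of the storage term:
    int_cell h_dot dm ~ (rho * A_c * dx) * (h_dot_in + h_dot_out) / 2."""
    mass = rho * A_c * dx
    return mass * 0.5 * (h_dot_in + h_dot_out)

def cell_first_law(mdot, h_in, h_out, h_dot_in, h_dot_out, rho, A_c, dx):
    """Transient first law for one cell, with the sign convention used above:
    Delta Q = mdot * (h_in - h_out) + storage."""
    return mdot * (h_in - h_out) + cell_storage_rate(
        h_dot_in, h_dot_out, rho, A_c, dx)
```

At steady state the storage term vanishes and the expression reduces to $\Delta Q = \dot{m}(h_{in} - h_{out})$, as expected.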
$$ Q = \epsilon Q_{max}, $$where $Q_{max}$ is found for counter-flow by trying to take each stream to equilibrium with the other stream's inlet, and seeing which side is limiting. If we have local estimates of stream heat capacity, so much the better, as we can use the textbook definition. For counter-flow, effectiveness can be computed as
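The $Q_{max}$ search can be sketched as follows, assuming each stream exposes an enthalpy-vs-temperature interpolant. The function name `q_max_counterflow` and the `h_of_T` callables are names introduced here for illustration, not part of any particular library.

```python
def q_max_counterflow(mdot_h, h_hot_of_T, T_hot_in,
                      mdot_c, h_cold_of_T, T_cold_in):
    """Maximum possible heat transfer for counter-flow: try cooling the hot
    stream to the cold inlet temperature, and heating the cold stream to the
    hot inlet temperature; the smaller enthalpy-flow change is limiting."""
    q_hot = mdot_h * (h_hot_of_T(T_hot_in) - h_hot_of_T(T_cold_in))
    q_cold = mdot_c * (h_cold_of_T(T_hot_in) - h_cold_of_T(T_cold_in))
    return min(q_hot, q_cold)
```

With real (tabulated) streams, the `h_of_T` callables would be the interpolants mentioned in the introduction; with constant heat capacities this reduces to the textbook $Q_{max} = C_{min}(T_{h,in} - T_{c,in})$.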
$$ \epsilon(NTU, C_r) = \begin{cases} \frac {1 - \exp[-NTU(1 - C_{r})]}{1 - C_{r}\exp[-NTU(1 - C_{r})]} & C_r < 1 \\[0.5ex] \frac{NTU}{1+NTU} & C_r = 1 \end{cases} $$Our first numerical task will be to eliminate the pole-zero cancellation and make this function behave well with respect to $C_r$. Let's visualize the function below.
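One way to eliminate the cancellation directly: with $x = NTU(1 - C_r)$, divide numerator and denominator by $(1 - C_r)$ and use `expm1` to keep precision for small $x$. This sketch (`counterflow_effectiveness_stable` is a name introduced here) is algebraically equivalent to the cases above and degrades gracefully to $NTU/(1+NTU)$ as $C_r \to 1$:

```python
from math import exp, expm1

def counterflow_effectiveness_stable(ntu, c_r):
    """Counter-flow effectiveness with the C_r -> 1 cancellation removed.
    Writing g(x) = -expm1(-x)/x (with g(0) = 1), the textbook formula
    rearranges to NTU*g / (NTU*g + exp(-x)).  Scalar sketch; vectorizing
    with numpy.expm1 and numpy.where is straightforward."""
    x = ntu * (1.0 - c_r)
    g = -expm1(-x) / x if x != 0.0 else 1.0
    return ntu * g / (ntu * g + exp(-x))
```

At $C_r = 1$ we get $x = 0$, $g = 1$, and the expression collapses to $NTU/(1 + NTU)$ with no branch on $C_r$ needed in the formula itself.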
Now that we have some intuition about the function, let's discuss approximations. We have immediately available an approximation that does well for small NTU. However, we can slightly improve the accuracy for larger NTU by at least matching the derivatives wrt. $C_r$ at endpoints {0,1}. Let's take the derivatives there.
import sympy
sympy.init_printing()
ntu = sympy.symbols('ntu', positive=True)
c_r = sympy.symbols('c_r', nonnegative=True)
expr = (1 - sympy.exp(-(ntu * (1 - c_r)))) / (1 - c_r * sympy.exp(-(ntu * (1 - c_r))))
dedn = sympy.diff(expr,c_r)
dedn
dedn0 = dedn.subs(c_r, 0)
dedn0
dedn1 = dedn.limit(c_r, 1)
dedn1
alpha = sympy.symbols('alpha', positive=True)
fit = (1 - c_r**alpha) * (1 - sympy.exp(-ntu)) + c_r**alpha * ntu / (1 + ntu)
fit
dfdn = sympy.diff(fit,c_r).simplify()
dfdn
dfdn0 = dfdn.subs(c_r, 0)
dfdn0
dfdn1 = dfdn.subs(c_r, 1)
dfdn1
# after substituting for c_r, these derivatives depend only on ntu
dedn0_func = sympy.lambdify(ntu, dedn0, 'numpy')
dedn1_func = sympy.lambdify(ntu, dedn1, 'numpy')
import numpy
from numpy import exp, log, sqrt, power
def counterflow_effectiveness(ntu, c_r):
    a = exp(-(ntu * (1. - c_r)))
    return (1. - a) / (1. - c_r * a)

def counterflow_effectiveness_matched(ntu, c_r=None):
    # the C_r = 1 limit of the counter-flow expression
    return ntu / (1. + ntu)
def counterflow_effectiveness_bivariate(ntu, c_r):
    #e_ntu_0 = ntu
    #e_ntu_inf = 1
    #x = 1 - exp(-ntu)
    e_c_r_0 = 1 - exp(-ntu)      # C_r = 0 endpoint
    e_c_r_1 = ntu / (1 + ntu)    # C_r = 1 endpoint
    # NTU, pow
    #   1    1
    #   5    2
    #  10    5
    # 100  100
    alpha = power(c_r, 1 + 0.05 * ntu**1.5)
    beta = 1 - alpha
    return beta * e_c_r_0 + alpha * e_c_r_1
import matplotlib.pyplot as plt
for ntu, fig in zip([0.1, 1, 2], [1, 2, 3]):
    c_r = numpy.linspace(0, 1)
    ce = counterflow_effectiveness(ntu, c_r)
    c1 = counterflow_effectiveness_matched(ntu)
    cb = counterflow_effectiveness_bivariate(ntu, c_r)
    plt.figure(fig)
    plt.plot(c_r, ce, '.')
    plt.plot(1, c1, 'o')
    plt.plot(c_r, cb, '-')
    plt.title("NTU = {:g}, eps(0) = {:g}, eps(1) = {:g}".format(
        ntu, ce[0], ce[-1]))
for ntu, fig in zip([5, 10, 100], [4, 4, 4]):
    c_r = numpy.linspace(0, 1)
    ce = counterflow_effectiveness(ntu, c_r)
    c1 = counterflow_effectiveness_matched(ntu)
    cb = counterflow_effectiveness_bivariate(ntu, c_r)
    plt.figure(fig)
    plt.plot(c_r, ce, '.', label="NTU = {:g}".format(ntu))
    plt.plot(1, c1, 'o')
    plt.plot(c_r, cb, '-')
plt.legend()
for ntu, fig in zip([0.1, 1, 2, 5, 10, 100], [5, 5, 5, 5, 5, 5]):
    c_r = numpy.linspace(0, 1)
    ce = counterflow_effectiveness(ntu, c_r)
    c1 = counterflow_effectiveness_matched(ntu)
    cb = counterflow_effectiveness_bivariate(ntu, c_r)
    plt.figure(fig)
    plt.plot(c_r, 1 - ce, '.', label="NTU = {:g}".format(ntu))
    plt.plot(1, 1 - c1, 'o')
    plt.plot(c_r, 1 - cb, '-')
plt.legend()
plt.show()
What we see here is that as long as $NTU < 2$, we can treat effectiveness as nearly linear in $C_r$. Since our goal is a spatially refined method anyway, perhaps we can require the grid to be fine enough that each cell has $NTU < 1$. If that would take a very large number of cells, then the total $NTU$ is large and the effectiveness is already nearly 1, so we shouldn't worry too much.
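That refinement rule is simple enough to state as code. A sketch: `n_cells_for_local_ntu` is an illustrative name, and $U A$ (hence $NTU$) is assumed to distribute uniformly over the cells.

```python
import math

def n_cells_for_local_ntu(ntu_total, ntu_cell_max=1.0):
    """Smallest cell count such that each cell's NTU stays below
    ntu_cell_max, assuming NTU is split evenly across cells."""
    return max(1, math.ceil(ntu_total / ntu_cell_max))
```

For example, a total $NTU$ of 3.5 would need 4 cells to keep each cell's $NTU$ below 1.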
import matplotlib.pyplot as plt
#plt.figure()
for c_r in numpy.linspace(0., 1., 5):
    ntu = numpy.logspace(-10, 10)
    if c_r < 1:
        ce = counterflow_effectiveness(ntu, c_r)
    else:
        ce = counterflow_effectiveness_matched(ntu)
    plt.figure(1)
    plt.loglog(ntu, ce, '.',
               label="c_r = {:0.2f}, eps(NTU=0) = {:g}".format(c_r, ce[0]))
plt.loglog(ntu, ntu, '-')
plt.legend()
plt.show()
import matplotlib.pyplot as plt
plt.figure()
#ntu = numpy.logspace(-1, 10)
ntu2 = numpy.logspace(-1, 2)
for c_r in numpy.linspace(0., 1., 5):
    if c_r < 1:
        ce = counterflow_effectiveness(ntu2, c_r)
    else:
        ce = counterflow_effectiveness_matched(ntu2)
    cb = counterflow_effectiveness_bivariate(ntu2, c_r)
    plt.loglog(ntu2, -log(1. - ce), '.',
               label="c_r = {:0.2f}".format(c_r))
    plt.loglog(ntu2, -log(1 - cb), '--')
plt.loglog(ntu2, ntu2)
plt.legend()
plt.show()
What we see here is that $\epsilon$ has the same trend with respect to $NTU$ regardless of $C_r$; only the shoulder moves out. This suggests trying fitting functions that have the same slope at $NTU = 0$ regardless of $C_r$, always converge to 1 as $NTU$ grows, but "grow more slowly" as $C_r$ increases. For numerical uses, a satisfying fit ...
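The claim that the slope at $NTU = 0$ is the same for every $C_r$ is easy to check numerically. This sketch repeats `counterflow_effectiveness` from above so it runs on its own:

```python
from math import exp

def counterflow_effectiveness(ntu, c_r):
    # repeated here so the check is self-contained
    a = exp(-(ntu * (1. - c_r)))
    return (1. - a) / (1. - c_r * a)

# d(eps)/d(NTU) at NTU -> 0, estimated by a forward difference; since
# eps(0) = 0, the slope is just eps(h)/h for small h
h = 1e-6
for c_r in (0.0, 0.25, 0.5, 0.75, 0.99):
    slope = counterflow_effectiveness(h, c_r) / h
    print("c_r = {:0.2f}: slope ~ {:.4f}".format(c_r, slope))
```

Expanding the exponentials for small $NTU$ gives $\epsilon \approx NTU/(1 + C_r \cdot NTU)$, so the leading slope is 1 for every $C_r$, consistent with the log-log plot above.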