Finite Element Method for Heat Exchangers

Suppose we have two fluid streams exchanging heat (or mass) and want to compute the amount of heat (or mass) exchanged for a given flow configuration, overall heat transfer coefficient U, and area along the process. Furthermore, the streams are real -- defined numerically by points (Q,T) or an interpolant, rather than by a constant stream heat capacity. We want a method that converges with spatial resolution, converges in time for steady-state boundary conditions, and hopefully gives a reasonable guess for transient conditions. Let's start with some assumptions:

  • We want a method that is first order in time, and preferably explicit.
  • The method may be spatially implicit.
  • The inlets (fluid state and rate of change) are specified as a boundary condition.
  • We have all the information we need about U, A, cross-section area, mass flow rate, and density.
  • It is okay to violate transient conservation of energy insofar as we choose to treat flow as incompressible.
  • Assume the fluid state in a given cross-section is uniform, so we are not concerned with spatial resolution of the boundary layer, mixing, etc.
  • Assume that only advection carries information downstream, i.e., flow is fast enough to neglect heat transfer between points in the same stream.

As a result of some of the assumptions, it is possible for the method to overshoot, so let's keep the time step smallish.

The obvious starting point is to draw a cell with some $U \Delta A$, write out the transient first law, and extract the relevant equations for the rate of change of state. I'm going to define states at the inlet and outlet of the cell. So for one cell we have

$$ \Delta Q = \dot{m} (h_{in} - h_{out}) + \int_{cell} \dot{h} dm $$
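Here $\dot{h}$ denotes the local time rate of change of specific enthalpy, the mass element is $dm = \rho A_c \, dx$, and $\Delta Q$ is the heat transfer rate exchanged with the other stream across the cell's share $\Delta A$ of the exchanger area.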

Now we need a finite element approximation of the mass integral (the time rate of change term) and an estimate of $\Delta Q$ from the interaction with the other stream. First, let's use a first-order central method for the integral term:

$$ \int_{cell} \dot{h} dm \approx (\rho A_c \Delta x) \left(\frac{1}{2}\right)\left(\dot{h}_{in}+\dot{h}_{out}\right) $$
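Substituting this approximation into the cell balance above and solving for the storage term gives the semi-discrete relation we will eventually march in time:

$$ \frac{\rho A_c \Delta x}{2}\left(\dot{h}_{in} + \dot{h}_{out}\right) \approx \Delta Q - \dot{m}\left(h_{in} - h_{out}\right) $$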

We will decide later how to collect the time rate of change terms. Meanwhile, the local heat transfer rate is

$$ Q = \epsilon Q_{max}, $$

where $Q_{max}$ is found, for counter-flow, by taking each stream to equilibrium with the other stream's inlet and seeing which side is limiting. If we have local estimates of the stream heat capacities, so much the better: we can use the textbook definitions (given below). For counter-flow, effectiveness can be computed as

$$ \epsilon(NTU, C_r) = \begin{cases} \frac {1 - \exp[-NTU(1 - C_{r})]}{1 - C_{r}\exp[-NTU(1 - C_{r})]} & C_r < 1 \\[0.5ex] \frac{NTU}{1+NTU} & C_r = 1 \end{cases} $$
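For reference, the textbook quantities feeding this formula, evaluated per cell from the local property data, are

$$ NTU = \frac{U \, \Delta A}{C_{min}}, \qquad C_r = \frac{C_{min}}{C_{max}}, \qquad Q_{max} = C_{min}\left(T_{h,in} - T_{c,in}\right), $$

where $C = \dot{m} c_p$ is a stream heat capacity rate.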

Our first numerical task will be to eliminate the pole-zero cancellation at $C_r = 1$ and make this function behave well with respect to $C_r$. Let's first build some intuition by visualizing the function (see the plots below).
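As an aside, the cancellation can also be removed algebraically rather than by fitting: rewrite the denominator as $(1 - e^{-NTU(1-C_r)}) + (1-C_r)\,e^{-NTU(1-C_r)}$ and evaluate the $1 - e^{-x}$ terms with expm1. A minimal sketch of that idea (not the approach pursued in the rest of this notebook):

In [ ]:
import numpy as np

def counterflow_effectiveness_stable(ntu, c_r):
    """Counter-flow effectiveness with the 0/0 at c_r = 1 removed.

    Sketch only: numerator and denominator share the factor that vanishes
    at c_r = 1, so expm1 keeps both accurate; exact c_r == 1 points fall
    back to NTU/(1+NTU).
    """
    ntu = np.asarray(ntu, dtype=float)
    d = 1.0 - np.asarray(c_r, dtype=float)
    num = -np.expm1(-ntu * d)            # 1 - exp(-NTU*(1-c_r)), accurate for small arguments
    den = num + d * np.exp(-ntu * d)     # identical to 1 - c_r*exp(-NTU*(1-c_r))
    with np.errstate(invalid='ignore'):  # 0/0 only occurs where d == 0, replaced below
        eps = np.where(d == 0.0, ntu / (1.0 + ntu), num / den)
    return eps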

Now that we have some intuition about the function, let's discuss approximations. We already have an approximation in hand that does well for small NTU. However, we can slightly improve the accuracy at larger NTU by at least matching the derivatives with respect to $C_r$ at the endpoints $\{0, 1\}$. Let's take the derivatives there.

In [38]:
import sympy
sympy.init_printing()
ntu = sympy.symbols('ntu', positive=True)
c_r = sympy.symbols('c_r', nonnegative=True, nonzero=True)
expr = (1-sympy.exp(-(ntu * (1.-c_r)))) / (1- c_r * sympy.exp(-(ntu * (1.-c_r))))
dedn = sympy.diff(expr,c_r)
dedn
Out[38]:
$$- \frac{ntu e^{- ntu \left(- c_{r} + 1.0\right)}}{- c_{r} e^{- ntu \left(- c_{r} + 1.0\right)} + 1} + \frac{1}{\left(- c_{r} e^{- ntu \left(- c_{r} + 1.0\right)} + 1\right)^{2}} \left(1 - e^{- ntu \left(- c_{r} + 1.0\right)}\right) \left(c_{r} ntu e^{- ntu \left(- c_{r} + 1.0\right)} + e^{- ntu \left(- c_{r} + 1.0\right)}\right)$$
In [39]:
dedn0 = dedn.subs(c_r, 0)
dedn0
Out[39]:
$$- ntu e^{- 1.0 ntu} + \left(1 - e^{- 1.0 ntu}\right) e^{- 1.0 ntu}$$
In [31]:
dedn1 = dedn.limit(c_r, 1)
dedn1
Out[31]:
$$- \frac{ntu^{2}}{2 ntu^{2} + 4 ntu + 2}$$
In [42]:
alpha = sympy.symbols('alpha', positive=True)
fit = (1-sympy.power.Pow(c_r, alpha)) * (1-sympy.exp(-ntu)) + sympy.power.Pow(c_r, alpha) * ntu / (1 + ntu)
fit
Out[42]:
$$\frac{c_{r}^{\alpha} ntu}{ntu + 1} + \left(1 - e^{- ntu}\right) \left(- c_{r}^{\alpha} + 1\right)$$
In [48]:
dfdn = sympy.diff(fit,c_r).simplify()
dfdn
Out[48]:
$$\frac{\alpha c_{r}^{\alpha - 1} e^{- ntu}}{ntu + 1} \left(ntu - e^{ntu} + 1\right)$$
In [49]:
dfdn0 = dfdn.subs(c_r, 0)
dfdn0
Out[49]:
$$\frac{0^{\alpha - 1} \alpha e^{- ntu}}{ntu + 1} \left(ntu - e^{ntu} + 1\right)$$
In [50]:
dfdn1 = dfdn.subs(c_r, 1)
dfdn1
Out[50]:
$$\frac{\alpha e^{- ntu}}{ntu + 1} \left(ntu - e^{ntu} + 1\right)$$
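Two things follow from these endpoint slopes. The $0^{\alpha - 1}$ factor in Out[49] means the fit's slope at $C_r = 0$ vanishes for $\alpha > 1$, while the exact slope there (Out[39]) is generally nonzero, so that endpoint cannot be matched exactly with this form. At $C_r = 1$ it can: equating Out[31] with Out[50] (a side calculation, not used verbatim in the code below) gives

$$ \alpha(NTU) = \frac{NTU^2}{2\,(NTU + 1)\left[1 - (1 + NTU)\,e^{-NTU}\right]}, $$

which tends to 1 as $NTU \to 0$ and grows roughly like $NTU/2$ at large $NTU$ -- broadly the trend recorded in the exponent table in the code below, at least for moderate NTU.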
In [102]:
?sympy.lambdify
# dedn0 and dedn1 depend only on ntu after fixing c_r, so lambdify over ntu
dedn0_func = sympy.lambdify(ntu, dedn0, 'numpy')
dedn1_func = sympy.lambdify(ntu, dedn1, 'numpy')
In [78]:
import numpy
from numpy import exp, log, sqrt, power

def counterflow_effectiveness(ntu, c_r):
    # Exact counter-flow effectiveness; 0/0 at c_r = 1 (hence the runtime warning below)
    a = exp(-(ntu * (1. - c_r)))
    return (1. - a) / (1. - c_r * a)

def counterflow_effectiveness_matched(ntu, c_r=None):
    # Exact value in the c_r = 1 limit
    return ntu / (1. + ntu)

def counterflow_effectiveness_bivariate(ntu, c_r):
    # Blend the exact c_r = 0 and c_r = 1 curves, weighted by a power of c_r
    e_c_r_0 = 1 - exp(-ntu)      # exact at c_r = 0
    e_c_r_1 = ntu / (1 + ntu)    # exact at c_r = 1
    # Exponent on c_r as a function of NTU; target values:
    # NTU, pow
    #   1    1
    #   5    2
    #  10    5
    # 100  100
    alpha = power(c_r, 1 + 0.05 * ntu**1.5)
    beta = 1 - alpha
    return beta * e_c_r_0 + alpha * e_c_r_1
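A quick sanity check (not part of the original cells): the bivariate blend reproduces the exact endpoint curves at $C_r = 0$ and $C_r = 1$ for any NTU, since the weight $c_r^{\,1 + 0.05\,NTU^{1.5}}$ is exactly 0 and 1 there.

In [ ]:
# Sanity check: the blend is exact at the c_r endpoints for a range of NTU
for ntu_test in [0.1, 1.0, 5.0, 10.0]:
    exact_0 = counterflow_effectiveness(ntu_test, 0.0)        # 1 - exp(-NTU)
    exact_1 = counterflow_effectiveness_matched(ntu_test)     # NTU / (1 + NTU)
    assert abs(counterflow_effectiveness_bivariate(ntu_test, 0.0) - exact_0) < 1e-12
    assert abs(counterflow_effectiveness_bivariate(ntu_test, 1.0) - exact_1) < 1e-12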
In [79]:
import matplotlib.pyplot as plt
for ntu,fig in zip([0.1, 1, 2],[1,2,3]):
    c_r = numpy.linspace(0,1)
    ce = counterflow_effectiveness(ntu, c_r)
    c1 = counterflow_effectiveness_matched(ntu)
    cb = counterflow_effectiveness_bivariate(ntu, c_r)
    plt.figure(fig)
    plt.plot(c_r, ce, '.')
    plt.plot(1, c1, 'o')
    plt.plot(c_r, cb, '-')
    plt.title("NTU = {:g}, eps(0) = {:g}, eps(1) = {:g}".format(
        ntu, ce[0], ce[-1]))
for ntu,fig in zip([5, 10, 100],[4,4,4]):
    c_r = numpy.linspace(0,1)
    ce = counterflow_effectiveness(ntu, c_r)
    c1 = counterflow_effectiveness_matched(ntu)
    cb = counterflow_effectiveness_bivariate(ntu, c_r)
    plt.figure(fig)
    plt.plot(c_r, ce, '.', label="NTU = {:g}".format(ntu))
    plt.plot(1, c1, 'o')
    plt.plot(c_r, cb, '-')
plt.legend()
for ntu,fig in zip([0.1, 1, 2, 5, 10, 100],[5,5,5,5,5,5]):
    c_r = numpy.linspace(0,1)
    ce = counterflow_effectiveness(ntu, c_r)
    c1 = counterflow_effectiveness_matched(ntu)
    cb = counterflow_effectiveness_bivariate(ntu, c_r)
    plt.figure(fig)
    plt.plot(c_r, (1-ce), '.', label="NTU = {:g}".format(ntu))
    plt.plot(1, (1-c1), 'o')
    plt.plot(c_r, (1-cb), '-')
plt.legend()
plt.show()
C:\Users\user1\Miniconda3\envs\openachp\lib\site-packages\ipykernel_launcher.py:5: RuntimeWarning: invalid value encountered in true_divide
  """

What we see here is that as long as $NTU < 2$, we can treat the effectiveness as roughly linear in $C_r$. Since our goal is a spatially refined method anyway, perhaps we can require the grid to be fine enough that each cell has $NTU < 1$. If that requires a very large number of cells, the overall effectiveness is already close to 1, so we shouldn't worry too much.
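As a rough sizing rule, the cell length that keeps the per-cell NTU below a target follows directly from $NTU_{cell} = U\,P\,\Delta x / C_{min}$. A sketch with illustrative names (`U`, the heat-transfer perimeter `perimeter`, and the smaller heat capacity rate `c_min` are assumed known per the assumptions above; they are not variables from this notebook):

In [ ]:
def max_cell_length(U, perimeter, c_min, ntu_cell_max=1.0):
    """Largest cell length dx keeping NTU_cell = U*perimeter*dx/c_min <= ntu_cell_max.

    Illustrative helper; the argument names are assumptions, not notebook variables.
    """
    return ntu_cell_max * c_min / (U * perimeter)

# Example: U = 500 W/m^2-K, perimeter = 0.1 m, c_min = 200 W/K  ->  dx <= 4 m
dx_max = max_cell_length(500.0, 0.1, 200.0)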

In [81]:
import matplotlib.pyplot as plt
#plt.figure()
for c_r in numpy.linspace(0.,1.,5):
    ntu = numpy.logspace(-10, 10)
    if c_r < 1:
        ce = counterflow_effectiveness(ntu, c_r)
    else:
        ce = counterflow_effectiveness_matched(ntu)
    plt.figure(1)
    plt.loglog(ntu, ce, '.',
               label="c_r = {:0.2f}, eps(NTU=0) = {:g}".format(c_r, ce[0]))
plt.loglog(ntu,ntu,'-')
plt.legend()
plt.show()
In [82]:
import matplotlib.pyplot as plt
plt.figure()
#ntu = numpy.logspace(-1, 10)
ntu2 = numpy.logspace(-1, 2)
for c_r in numpy.linspace(0., 1., 5):
    if c_r < 1:
        ce = counterflow_effectiveness(ntu2, c_r)
    else:
        ce = counterflow_effectiveness_matched(ntu2)
    cb = counterflow_effectiveness_bivariate(ntu2,c_r)
    plt.loglog(ntu2, -log(1.-(ce)), '.',
               label="c_r = {:0.2f}".format(c_r))
    plt.loglog(ntu2, -log(1-cb), '--')
plt.loglog(ntu2, ntu2)
plt.legend()
plt.show()
C:\Users\user1\Miniconda3\envs\openachp\lib\site-packages\ipykernel_launcher.py:12: RuntimeWarning: divide by zero encountered in log
C:\Users\user1\Miniconda3\envs\openachp\lib\site-packages\ipykernel_launcher.py:14: RuntimeWarning: divide by zero encountered in log

What we see here is that $\epsilon$ has the same trend with respect to $NTU$ regardless of $C_r$, except that the shoulder moves out. This suggests trying fitting functions that have the same slope at $NTU = 0$ regardless of $C_r$, that always converge to 1 as $NTU$ grows, but that "grow more slowly" as $C_r$ increases. For numerical uses, a satisfying fit ...
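The common small-$NTU$ behavior can be made explicit: expanding the exponential for $NTU\,(1 - C_r) \ll 1$,

$$ \epsilon \approx \frac{NTU\,(1 - C_r)}{(1 - C_r)\left(1 + C_r\,NTU\right)} = \frac{NTU}{1 + C_r\,NTU}, $$

so every curve leaves the origin with unit slope regardless of $C_r$, which is the shared asymptote plotted as the reference line above.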
