Functional Dynamic Programming
Dynamic programming is a very common class of algorithms. In C/C++/Java it is usually memoized with arrays, but functional languages generally cannot manipulate arrays and other side-effect-dependent data structures conveniently, so functional memoization has to take a different route.
This post is just a short note, showing with a few code snippets the functional dynamic-programming tricks I know of.
(Updated 2020/5/17, five months later: added a section on Memocombinators.)
Course-of-Values Recursion
Course-of-values recursion (CoV) is, to me, the most intuitive of these techniques: results that have already been computed are kept as part of the return value, so they can be looked up again during the recursion.
For a recursive function f, define an auxiliary function bar that returns the whole history of values, newest first: bar n = [f n, f (n-1), ..., f 0].
The original f is then recovered from bar as f n = head (bar n).
The Fibonacci sequence:
fibBar :: Int -> [Int]
fibBar 0 = [0]
fibBar 1 = 1 : fibBar 0
fibBar n = let course = fibBar (n-1) in -- [fib(n-1)..fib(0)]
           let p  = course !! 0 in      -- fib(n-1)
           let pp = course !! 1 in      -- fib(n-2)
           p + pp : course
-- >>> fibBar 10
-- [55,34,21,13,8,5,3,2,1,1,0]
--
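As described above, the original function is just the head of the course; a one-line wrapper (my addition, not in the original) makes that explicit:
fib :: Int -> Int
fib = head . fibBar
-- >>> fib 10
-- 55
--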
Binary Partitions:
In how many ways can a number be written as a sum of powers of two? The recurrence is bp(0) = 1; bp(n) = bp(n-1) for odd n; bp(n) = bp(n-1) + bp(n/2) for even n.
In course-of-values form:
bpBar :: Int -> [Int]
bpBar n
  | n == 0    = [1]
  | even n    = let course = bpBar (n-1) in
                let pred = course !! 0 in                 -- bp (n-1)
                let half = course !! (n-1 - n `div` 2) in -- bp (n/2)
                pred + half : course
  | otherwise = let course = bpBar (n-1) in head course : course
-- >>> bpBar 20
-- [60,46,46,36,36,26,26,20,20,14,14,10,10,6,6,4,4,2,2,1,1]
--
Unfortunately, the complexity is not O(n): every step indexes into the list, which by itself costs up to O(n), so the whole thing is more like O(n²). Not great.
The 0-1 knapsack problem:
The recurrence is knapsack(0, y) = 0; knapsack(i, y) = knapsack(i-1, y) when wi > y; knapsack(i, y) = max(knapsack(i-1, y), knapsack(i-1, y - wi) + vi) otherwise.
This time the whole n×W state space has to be crammed into the course, which gives:
type Weight = Int
type Value = Double
type Items = [(Weight, Value)]
knapsack :: Items -> Weight -> Value
knapsack items capacity = head $ bar items capacity where
  bar :: Items -> Weight -> [Value]
  bar []             0 = [0]
  bar (_:items)      0 = 0 : bar items capacity
  bar []             y = 0 : bar [] (y-1)
  bar ((w, v):items) y
    | w <= y    = let course = bar ((w, v):items) (y-1) in
                  let v1  = course !! capacity in       -- knapsack(i-1, y)
                  let v2  = course !! (capacity + w) in -- knapsack(i-1, y-wi)
                  let new = max v1 (v2 + v) in
                  new : course
    | otherwise = let course = bar ((w, v):items) (y-1) in
                  course !! capacity : course
-- >>> knapsack [(2, 6.0), (2, 3.0), (6, 5.0), (5, 4.0), (4, 6.0)] 10
-- 15.0
--
Besides the constant list indexing, CoV has another limitation: not every recursive function can be put into this form — Ackermann's function, for example.
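For reference, here is Ackermann's function (my illustration, not from the original post): its inner recursive call feeds a freshly computed value back in as an argument, so the earlier results cannot be laid out along a single chain of predecessors the way a course requires.
ack :: Integer -> Integer -> Integer
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1)) -- the argument is itself a recursive result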
Streaming
This is an elegant approach peculiar to Haskell.
The Fibonacci sequence is again the classic example:
fibs :: [Integer]
fibs = 1:1:zipWith (+) fibs (tail fibs)
-- >>> take 10 fibs
-- [1,1,2,3,5,8,13,21,34,55]
Think of fibs as a stream: its first two elements are 1, and the rest is produced by combining (zipWith (+)) the stream itself (fibs) with its own tail (tail fibs). A value is only computed when the "next" element is demanded, and it is never computed twice — that is the power of Haskell's lazy evaluation.
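Getting a single Fibonacci number is then just a matter of indexing into the stream (a trivial usage note of mine):
-- >>> fibs !! 9
-- 55
--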
Similarly, the factorial stream is built from [1..] and itself:
facts :: [Integer]
facts = 1 : zipWith (*) [1..] facts
-- >>> take 10 facts
-- [1,1,2,6,24,120,720,5040,40320,362880]
--
Streams can express more complex problems too, for instance the Binary Partitions problem from above:
bps :: [Integer]
bps = 1 : zipWith3 reduce
            [1..]
            bps
            (tail $ dup bps) -- the stream of bp (n/2) values
  where
    dup xs = xs >>= \x -> [x, x] -- double up every element
    reduce n a b
      | even n    = a + b
      | otherwise = a
-- >>> take 21 bps
-- [1,1,2,2,4,4,6,6,10,10,14,14,20,20,26,26,36,36,46,46,60]
--
Note that the order is the reverse of the CoV version. Also, there is no indexing overhead this time, so it really is O(n). Great.
The stream form is elegant — but only rarely can a stream be written this elegantly, since many algorithms are simply messy by nature. The 0-1 knapsack problem above, for instance, comes out ugly no matter how it is written (and it is already a fairly simple algorithm).
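For what it is worth, here is one way the knapsack could be phrased over lazily defined DP rows — a sketch of mine, not from the original, and it still indexes into the previous row, so it is hardly any prettier:
-- sketch only: rows !! i holds the best value achievable with the first i items,
-- for every remaining capacity 0..capacity
knapsackRows :: Items -> Weight -> Value
knapsackRows items capacity = last (rows !! length items)
  where
    rows = replicate (capacity + 1) 0 : zipWith nextRow items rows
    nextRow (w, v) prev =
      [ if w <= y then max (prev !! y) (prev !! (y - w) + v) else prev !! y
      | y <- [0 .. capacity] ]
-- >>> knapsackRows [(2, 6.0), (2, 3.0), (6, 5.0), (5, 4.0), (4, 6.0)] 10
-- 15.0
--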
Dynamorphism
A dynamorphism is one of the recursion schemes: the composition of an anamorphism and a histomorphism. The anamorphism unfolds the recursion tree, and the histomorphism takes care of the memoization and the folding.
Operationally it can be seen as an abstraction of CoV: the memoization and the recursion are handled automatically, and the structure of the course is generalized (although in practice a list is still what usually gets used).
-- dynamorphism
dyna :: Functor f => (a -> f a) -> (f (Cofree f c) -> c) -> a -> c
dyna phi psi a = let x :< _ = h a in x where
  h = uncurry (:<) . (\a -> (psi a, a)) . fmap h . phi
-- dyna phi psi = histo psi . ana phi
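The dyna above and the examples below rely on a few things from Hackage; roughly the following setup is assumed (my reconstruction — the post only says the recursion-schemes library is taken straight from Hackage):
import Control.Comonad.Cofree (Cofree (..)) -- (:<), from the "free" package
import Data.Functor.Base (ListF (..))       -- Nil / Cons, from "recursion-schemes"
import Data.Maybe (fromJust)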
Binary Partitions:
bpDyna :: Integer -> Integer
bpDyna = dyna phi psi where
  phi 0 = Nil
  phi n = Cons n (n - 1)
  psi Nil = 0
  psi (Cons n course)
    | even n    = let pred = fromJust $ lookupCourse course 0 in                 -- bp (n-1)
                  let half = fromJust $ lookupCourse course (n-1 - n `div` 2) in -- bp (n/2)
                  pred + half
    | otherwise = fromJust $ lookupCourse course 0
lookupCourse :: (Num n, Eq n) => Cofree (ListF e) a -> n -> Maybe a
lookupCourse (x :< _)    0 = Just x
lookupCourse (_ :< hole) n =
  case hole of
    Nil         -> Nothing
    Cons _ next -> lookupCourse next (n - 1)
-- >>> map bpDyna [0..20]
-- [1,1,2,2,4,4,6,6,10,10,14,14,20,20,26,26,36,36,46,46,60]
--
Longest common subsequence:
Another classic DP problem, so I won't restate it. Computing the subsequence itself rather than just its length, the recurrence is lcs(xs, []) = lcs([], ys) = []; lcs(x:xs, y:ys) = x : lcs(xs, ys) when x == y; otherwise the longer of lcs(xs, y:ys) and lcs(x:xs, ys).
lcsDyna :: Eq a => [a] -> [a] -> [a]
lcsDyna as bs = dyna phi psi (as, bs) where
  aslen = length as
  phi     ([], [])   = Nil
  phi all@([], y:ys) = Cons all (as, ys)
  phi all@(x:xs, ys) = Cons all (xs, ys)
  psi Nil              = []
  psi (Cons ([], _) _) = []
  psi (Cons (_, []) _) = []
  psi (Cons (x:_, y:_) course)
    | x == y                = x : zs
    | length xs > length ys = xs
    | otherwise             = ys
    where
      xs = fromJust $ lookupCourse course 0           -- LCS after dropping x from the first list
      ys = fromJust $ lookupCourse course aslen       -- LCS after dropping y from the second list
      zs = fromJust (lookupCourse course (aslen + 1)) -- LCS after dropping both x and y
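A quick sanity check, in the same doctest style as the other examples (the input is my own):
-- >>> lcsDyna "haskell" "pascal"
-- "asl"
--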
The 0-1 knapsack problem:
knapsackDyna :: Items -> Weight -> Value
knapsackDyna goods w = dyna phi psi (goods, w) where
  phi ([], 0)       = Nil
  phi (wv:goods, 0) = Cons (wv:goods, 0) (goods, w)
  phi (goods, w)    = Cons (goods, w) (goods, w - 1)
  psi Nil              = 0
  psi (Cons ([], _) _) = 0
  psi (Cons ((w', v'):goods', rest) course)
    | w' > rest = fromJust $ lookupCourse course w       -- knapsack(i-1, rest)
    | otherwise =
        let v1 = fromJust $ lookupCourse course w        -- knapsack(i-1, rest)
            v2 = fromJust $ lookupCourse course (w' + w) -- knapsack(i-1, rest - w')
        in  max v1 (v2 + v')
-- >>> knapsackDyna [(2, 6.0), (2, 3.0), (6, 5.0), (5, 4.0), (4, 6.0)] 10
-- 15.0
--
By the way, the recursion-schemes library used here is the one straight off Hackage — I couldn't be bothered to write my own. A proper introduction to dynamorphisms may become a separate post some day (no promises). To be honest, I don't think this reduces the complexity much anyway...
Memocombinators
(Updated 2020/5/17; not worth a whole new post.)
This one is extremely concise and pretty, and beats the approaches above in almost every respect. But the method itself is rather tricky, and hinges on how Haskell handles partial application.
The most direct way to memoize is to keep a table that caches results already computed. For example:
-- take a function and return a memoized version of it
memoize :: (Int -> a) -> (Int -> a)
memoize f n = map f [0..] !! n
The cache here is the list map f [0..]. Hoping that this gives us memoization, we could write the Fibonacci numbers like this:
fibMemo :: Int -> Integer
fibMemo = memoize fib where
  fib 0 = 0
  fib 1 = 1
  fib n = fibMemo (n - 1) + fibMemo (n - 2)
-- the recursive calls must not go to fib directly, because fib itself has not been memoized by memoize
Sadly, this achieves no memoization at all: every call recomputes map f [0..] from scratch. The culprit is that memoize is defined with both f and n on the left-hand side, so the list lives inside a function that is still waiting for its index argument and is never shared between calls.
No memoization, and huge memory consumption on top of that.
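A common fix — presumably the trick the partial-application remark above is pointing at — is to build the table before the index argument is taken, so that the partially applied function closes over a single shared list. A minimal sketch of mine, not from the original:
-- build the table outside the lambda that takes the index, so that
-- memoize' f closes over one shared list (my sketch)
memoize' :: (Int -> a) -> (Int -> a)
memoize' f = (table !!)
  where table = map f [0..]

fibMemo' :: Int -> Integer
fibMemo' = memoize' fib
  where
    fib 0 = 0
    fib 1 = 1
    fib n = fibMemo' (n - 1) + fibMemo' (n - 2)
-- >>> fibMemo' 50
-- 12586269025
--
Libraries such as data-memocombinators and MemoTrie on Hackage package this idea up, using trie-shaped tables instead of a linear list so that lookups are cheaper.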
