A Small Step Toward Machine Learning: A "Magic Fountain" Powered by Taichi Autodiff
Differentiable programming is widely used in both scientific computing and artificial intelligence. Automatic differentiation (AD), a technique for automatically computing derivatives of a given program, is a key tool for implementing differentiable programming. Taichi Lang currently supports reverse-mode automatic differentiation, allowing users to write differentiable code inside Taichi kernels.
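To build intuition for what reverse-mode autodiff does under the hood, here is a minimal pure-Python toy (a conceptual sketch only, not how Taichi actually implements it): each value records its parents and local derivatives, and `backward` propagates the output gradient back to every input.

```python
class Var:
    """A scalar that records how it was computed, for reverse-mode autodiff (toy sketch)."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # list of (parent_var, local_derivative)

    def __mul__(self, other):
        # d(x*y)/dx = y, d(x*y)/dy = x
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self, seed=1.0):
        # Accumulate the gradient, then push it back through each parent.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x      # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

This naive recursion revisits shared subexpressions and would be slow on large graphs; real systems such as Taichi instead generate gradient kernels, but the chain-rule bookkeeping is the same idea.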
02 Simulator
03 Neural Network
04 Controller
05 Start Training!
pos = ti.Vector.field(3, float)
vel = ti.Vector.field(3, float)
acc = ti.Vector.field(3, float)
den = ti.field(float)
pre = ti.field(float)
ti.root.dense(ti.ijk, (batch_size, steps, particle_num)).place(pos, vel, acc, den, pre)
*Strictly speaking, a linear layer should not contain a nonlinear activation function; we include one here only to keep the implementation simple.
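The article's `Linear` layer itself is not shown here; as a rough illustration of the role it plays, the sketch below implements a dense layer with an optional `tanh` activation in NumPy. The function name and shapes are assumptions for illustration, not the article's actual code.

```python
import numpy as np

def linear_forward(x, w, b, activation=False):
    """Hypothetical dense layer: y = x @ w + b, optionally followed by tanh.
    Mirrors the role of the Linear layer above, not its actual implementation."""
    y = x @ w + b
    return np.tanh(y) if activation else y

# One batch of 4 samples, 3 inputs -> 2 outputs (illustrative sizes)
x = np.ones((4, 3))
w = np.zeros((3, 2))
b = np.zeros(2)
out = linear_forward(x, w, b, activation=True)
print(out.shape)  # (4, 2)
```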
@ti.data_oriented
class SGD:
    def __init__(self, params, lr):
        self.params = params
        self.lr = lr

    def step(self):
        for w in self.params:
            self._step(w)

    @ti.kernel
    def _step(self, w: ti.template()):
        for I in ti.grouped(w):
            w[I] -= min(max(w.grad[I], -20.0), 20.0) * self.lr

    def zero_grad(self):
        for w in self.params:
            w.grad.fill(0.0)
*Note that unrolling the simulation for too many steps can cause gradient explosion; we apply gradient clipping to mitigate this.
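The clipped update in the `_step` kernel above can be checked in plain Python; this small sketch applies the same per-element rule `w[I] -= min(max(w.grad[I], -clip), clip) * lr` to a list of weights.

```python
def clipped_sgd_step(w, grad, lr, clip=20.0):
    """One SGD update with per-element gradient clipping, mirroring the
    Taichi kernel: w[I] -= min(max(w.grad[I], -clip), clip) * lr."""
    return [wi - min(max(gi, -clip), clip) * lr for wi, gi in zip(w, grad)]

w = [1.0, 1.0, 1.0]
grad = [100.0, -100.0, 0.5]   # the first two gradients exceed the clip range
print(clipped_sgd_step(w, grad, lr=0.1))
```

With `lr=0.1` and `clip=20.0`, the two exploding gradients are clamped to ±20, so each update changes a weight by at most 2.0 regardless of how large the raw gradient is.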
...
loss = ti.field(float, shape=(), needs_grad=True)
input_states = ti.field(float, shape=(model_num, steps, batch_size, n_input), needs_grad=True)
# Construct the fully-connected layers
fc1 = Linear(n_models=model_num, batch_size=batch_size, n_steps=steps,
             n_input=n_input, n_hidden=n_hidden, n_output=n_output,
             needs_grad=True, activation=False)
fc2 = Linear(n_models=model_num, batch_size=batch_size, n_steps=steps,
             n_input=n_output, n_hidden=n_hidden, n_output=n_output_act,
             needs_grad=True, activation=True)
fc1.weights_init()
fc2.weights_init()
# Feed trainable parameters to the optimizer
NNs = [fc1, fc2]
parameters = []
for layer in NNs:
parameters.extend(layer.parameters())
optimizer = SGD(params=parameters, lr=learning_rate)
...
for opt_iter in range(opt_iters):
    ...
    for current_data_offset in range(0, training_sample_num, batch_size):
        ...
        with ti.Tape(loss=loss):
            for i in range(1, steps):
                initialize_density(i - 1)
                update_density(i - 1)
                update_pressure(i - 1)
                # Apply the NN based controller
                fc1.forward(i - 1, input_states)
                fc2.forward(i - 1, fc1.output)
                controller_output(i - 1)
                apply_force(i - 1)
                update_force(i - 1)
                advance(i)
                boundary_handle(i)
                compute_dist(i)
            compute_loss(steps - 1)
        optimizer.step()
...