Written by | Zhao Luyang

An operator (here abbreviated as "op") is the basic unit of computation in deep learning. Every deep learning framework contains hundreds of ops, which perform all kinds of numerical and tensor operations.

In deep learning we build networks by stacking nn.Module blocks like bricks, while ops sit one level lower: they are the recipes and raw materials from which those bricks are made.

For example, consider the following demo network:

import oneflow as torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super(TinyModel, self).__init__()
        self.linear1 = torch.nn.Linear(100, 200)
        self.activation = torch.nn.ReLU()
        self.linear2 = torch.nn.Linear(200, 10)
        self.softmax = torch.nn.Softmax()

    def forward(self, x):
        x = self.linear1(x)
        x = self.activation(x)
        x = self.linear2(x)
        x = self.softmax(x)
        return x

tinymodel = TinyModel()
print('The model:')
print(tinymodel)

Structurally, this network is assembled from various nn.Module instances such as Linear, ReLU, and Softmax. In essence, however, these nn.Module instances are themselves stitched together from basic ops such as Matmul, Relu, and Softmax, and that is what actually provides their functionality. So for an existing op in OneFlow, how does a call travel from the Python layer to the C++ layer, and how is it routed and executed? Taking

output = flow.relu(input)

as an example, this article walks through the complete journey of an op from Python to C++ execution.

First, here is an overview diagram of the whole flow.

Below, we trace each stage in detail from the source code.

1. Binding

Here, binding refers to binding Python code to C++ code. Normally we build networks, train models, and call functions from Python. In practice, those functions are usually just a thin wrapper at the Python layer, and the underlying implementation lives in C++. So how does a Python call reach C++? This is exactly what the Python/C++ binding is for.

In a deep learning framework, function binding can be implemented either with Python's native C API or with pybind11. OneFlow uses both. For example:

  • oneflow/api/python/framework/tensor.cpp
  • oneflow/api/python/framework/tensor_functions.cpp

The tensor.xxx methods in the two files above are bound via the Python C API;

  • oneflow/core/functional/functional_api.yaml

whereas the many flow.xxx methods defined in the yaml above are bound via pybind. We won't go into much detail about the Python C API or pybind here; their usage is covered in the respective documentation:

  • https://docs.python.org/zh-cn...
  • https://pybind11.readthedocs....
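For readers unfamiliar with pybind11, here is a minimal, self-contained example of exposing a C++ function to Python. It is purely illustrative: the module name toy_ext and the function relu_scalar are made-up names and have nothing to do with OneFlow's actual generated bindings.

#include <pybind11/pybind11.h>

// A toy C++ implementation we want to call from Python.
double relu_scalar(double x) { return x > 0.0 ? x : 0.0; }

// Exposes relu_scalar as `toy_ext.relu_scalar` on the Python side.
// `toy_ext` and `relu_scalar` are made-up names used only for illustration.
PYBIND11_MODULE(toy_ext, m) {
  m.def("relu_scalar", &relu_scalar, "Scalar ReLU implemented in C++",
        pybind11::arg("x"));
}

After building the extension, `import toy_ext; toy_ext.relu_scalar(-1.0)` returns 0.0; OneFlow's generated bindings follow the same principle at a much larger scale.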

Now let's come back to flow.relu. The flow.relu we call at the Python layer actually calls oneflow._C.relu, which is defined in

python/oneflow/__init__.py

The _C prefix indicates that the implementation lives in the underlying C++. Similar to PyTorch, OneFlow also defines a set of interface-export and code-generation rules based on .yaml files. In functional_api.yaml, for instance, we can find the exported function signature for Relu:

- name: "relu"  signature: "Tensor (Tensor x, Bool inplace=False) => Relu"  bind_python: True

From this yaml definition we can see that flow._C.relu takes two arguments, a tensor and a bool, returns a tensor, and is bound to the C++ Relu method. In fact, when OneFlow is compiled, the script

tools/functional/generate_functional_api.py

is executed to parse functional_api.yaml and generate code, dynamically producing the C++ header and source files:

  • build/oneflow/core/functional/functional_api.yaml.h
  • build/oneflow/core/functional/functional_api.yaml.cpp

The generated .cpp file then calls the corresponding functor to complete the function call on the C++ side. Sticking with flow._C.relu as the example, its corresponding functor is defined in
oneflow/core/functional/impl/activation_functor.cpp:

class ReluFunctor {
 public:
  ReluFunctor() { op_ = CHECK_JUST(one::OpBuilder("relu").Input("x", 1).Output("y", 1).Build()); }
  Maybe<Tensor> operator()(const std::shared_ptr<Tensor>& x, bool inplace) const {
    ...
  }
 private:
  std::shared_ptr<OpExpr> op_;
};

ReluFunctor is registered via

ONEFLOW_FUNCTION_LIBRARY(m) {
  m.add_functor<impl::ReluFunctor>("Relu");
  ...
}

Once the functor is registered as a functional interface this way, flow._C.relu at the Python layer is bound to "Relu". The same function can also be called directly from C++ as functional::Relu.
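As a rough sketch, such a direct C++ call site could look like the following. The include path and namespaces are assumptions, and the signature simply mirrors the yaml entry shown earlier; this is not copied from the generated code.

// Sketch only: calling the exported functional interface directly from C++.
// Include path and namespaces are assumed; the signature follows the yaml entry
// "Tensor (Tensor x, Bool inplace=False) => Relu" shown above.
#include "oneflow/core/functional/functional.h"

namespace oneflow {
namespace one {

Maybe<Tensor> RunRelu(const std::shared_ptr<Tensor>& x) {
  // Equivalent to calling flow._C.relu(x) from Python.
  return functional::Relu(x, /*inplace=*/false);
}

}  // namespace one
}  // namespace oneflow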

2. Functor

The functor is not only the hub of the Python -> C++ interaction; it is also the first stop for op invocation, input-argument inference, and checking. Typically, at the functor layer each op performs various checks on its input tensors (shape, dtype, dimensionality, element count, and so on) and parses and handles any op-specific logic. The Relu functor looks like this:

class ReluFunctor {
 public:
  ReluFunctor() { op_ = CHECK_JUST(one::OpBuilder("relu").Input("x", 1).Output("y", 1).Build()); }
  Maybe<Tensor> operator()(const std::shared_ptr<Tensor>& x, bool inplace) const {
    if (inplace) {
      JUST(CheckInplaceValid(x));
      std::shared_ptr<TensorTuple> outputs = std::make_shared<TensorTuple>(1);
      outputs->at(0) = x;
      JUST(OpInterpUtil::Dispatch(*op_, {x}, outputs.get(), AttrMap{}));
      return outputs->at(0);
    } else {
      return OpInterpUtil::Dispatch<Tensor>(*op_, {x});
    }
  }
 private:
  std::shared_ptr<OpExpr> op_;
};

As you can see, ReluFunctor is fairly simple. It defines a private member variable

std::shared_ptr<OpExpr> op_;

This op_ is the Relu op to be executed, constructed via OpBuilder. Inside the functor's operator(), the code takes one of two branches depending on whether the call is in-place, and finally uses OpInterpUtil::Dispatch() to hand the op, the input tensors, and the arguments over to the Interpreter.

3. Dispatch

After an op has finished its checks and logic handling in the functor, it is in most cases dispatched through OpInterpUtil::Dispatch(), whose destination is the Interpreter, where the op receives further processing. In oneflow/core/framework/op_interpreter/op_interpreter_util.h we can see several overloads of the Dispatch template:

class OpInterpUtil {
 public:
  template<typename T>
  static Maybe<T> Dispatch(const OpExpr& op_expr, const TensorTuple& inputs, const AttrMap& attrs) {
    return Dispatch<T>(op_expr, inputs, OpExprInterpContext(attrs));
  }

  template<typename T>
  static Maybe<T> Dispatch(const OpExpr& op_expr, const TensorTuple& inputs) {
    return Dispatch<T>(op_expr, inputs, OpExprInterpContext(AttrMap{}));
  }

  template<typename T>
  static Maybe<T> Dispatch(const OpExpr& op_expr, const TensorTuple& inputs,
                           const OpExprInterpContext& ctx);

  static Maybe<void> Dispatch(const OpExpr& op_expr, const TensorTuple& inputs,
                              TensorTuple* outputs, const AttrMap& attrs) {
    return Dispatch(op_expr, inputs, outputs, OpExprInterpContext(attrs));
  }

  static Maybe<void> Dispatch(const OpExpr& op_expr, const TensorTuple& inputs,
                              TensorTuple* outputs) {
    return Dispatch(op_expr, inputs, outputs, OpExprInterpContext(AttrMap{}));
  }

  static Maybe<void> Dispatch(const OpExpr& op_expr, const TensorTuple& inputs,
                              TensorTuple* outputs, const OpExprInterpContext& ctx);

These overloads exist to handle different combinations of inputs, outputs, and OpExprInterpContext. The OpExprInterpContext is the context the op needs inside the Interpreter; it can carry the attributes the op's computation requires (e.g., kernel_size and padding for a conv2d op) as well as descriptive information such as device, sbp, and parallel settings. All of these Dispatch overloads eventually funnel into:

/* static */ Maybe<void> OpInterpUtil::Dispatch(const OpExpr& op_expr,
                                                const TensorTuple& inputs,
                                                TensorTuple* outputs,
                                                const OpExprInterpContext& ctx) {
  return JUST(GetInterpreter(inputs, ctx, op_expr))->Apply(op_expr, inputs, outputs, ctx);
}
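To make the role of OpExprInterpContext and the AttrMap more concrete, here is a sketch of a functor for an op that carries attributes, modeled on the common pattern in oneflow/core/functional/impl. The op type name "leaky_relu", the attribute name "alpha", and the exact helper signatures should be read as assumptions for illustration, not as verbatim OneFlow source.

// Sketch: a functor that packs an attribute into the dispatch call.
// Modeled on the usual pattern in oneflow/core/functional/impl; names and
// helper signatures are assumptions for illustration.
class LeakyReluLikeFunctor {
 public:
  LeakyReluLikeFunctor() {
    op_ = CHECK_JUST(one::OpBuilder("leaky_relu").Input("x", 1).Output("y", 1).Build());
  }
  Maybe<Tensor> operator()(const std::shared_ptr<Tensor>& x, float alpha) const {
    MutableAttrMap attrs;
    JUST(attrs.SetAttr<float>("alpha", alpha));
    // The attrs end up wrapped in an OpExprInterpContext and travel with the op
    // down to the Interpreter and, eventually, the kernel.
    return OpInterpUtil::Dispatch<Tensor>(*op_, {x}, attrs);
  }

 private:
  std::shared_ptr<OpExpr> op_;
};

Compared with ReluFunctor above, the only real difference is that a non-empty attribute map is handed to Dispatch instead of AttrMap{}.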

At this point Dispatch has done its part; the rest is handed over to the Interpreter.

4. Interpreter

Get Interpreter

Let's first look at GetInterpreter, which simply fetches the Interpreter that will take charge of the op's subsequent execution. With the check-related logic omitted, the main code is as follows:
oneflow/core/framework/op_interpreter/op_interpreter_util.cpp

Maybe<AutogradInterpreter> GetInterpreter(const TensorTuple& inputs, const OpExprInterpContext& ctx,
                                          const OpExpr& op_expr) {
  static const auto& g_lazy_interpreter = BuildLazyInterpreter();
  static const auto& g_eager_consistent_interpreter = BuildEagerInterpreter(/*is_mirrored=*/false);
  static const auto& g_eager_mirrored_interpreter = BuildEagerInterpreter(/*is_mirrored=*/true);
  if (!LazyMode::is_enabled()) {
    if (inputs.empty()) {
      if (ctx.parallel_desc.has_value()) {
        JUST(ctx.nd_sbp);
        CHECK_OR_RETURN(!ctx.device.has_value());
        return g_eager_consistent_interpreter;
      } else {
        CHECK_OR_RETURN(!ctx.nd_sbp.has_value());
        return g_eager_mirrored_interpreter;
      }
    } else {
      if (inputs.at(0)->is_consistent()) {
        ...
        return g_eager_consistent_interpreter;
      } else {
        ...
        return g_eager_mirrored_interpreter;
      }
    }
    UNIMPLEMENTED_THEN_RETURN();
  }
  return g_lazy_interpreter;
}

From this logic we can see that Interpreters fall roughly into an Eager Interpreter and a Lazy Interpreter, and the Eager Interpreter is further split between Eager Mirrored and Eager Consistent. Concretely, there are three subclass implementations:

  • EagerMirroredInterpreter
  • EagerConsistentInterpreter
  • LazyInterpreter

In ordinary eager mode (whether single-device or DDP), execution goes through EagerMirroredInterpreter. Outside of ordinary eager mode, if sbp and placement have been set on the input tensors, execution enters EagerConsistentInterpreter. In lazy mode (i.e., when using nn.Graph), execution enters LazyInterpreter.

Next, let's see how these three Interpreters are built:

std::shared_ptr<AutogradInterpreter> BuildEagerInterpreter(const bool& is_mirrored) {
  std::shared_ptr<OpExprInterpreter> internal;
  if (is_mirrored) {
    internal = std::make_shared<EagerMirroredInterpreter>();
  } else {
    internal = std::make_shared<EagerConsistentInterpreter>();
  }
  return std::make_shared<AutogradInterpreter>(internal);
}

std::shared_ptr<AutogradInterpreter> BuildLazyInterpreter() {
  auto internal = std::make_shared<LazyInterpreter>();
  return std::make_shared<AutogradInterpreter>(internal);
}

As we can see, once constructed, each of these three Interpreters becomes the private member internal used to build an AutogradInterpreter, and an AutogradInterpreter is what ultimately gets returned:

class AutogradInterpreter {
 public:
  AutogradInterpreter() = delete;
  AutogradInterpreter(const std::shared_ptr<OpExprInterpreter>& internal) : internal_(internal) {}
  virtual ~AutogradInterpreter() = default;

  Maybe<void> Apply(const OpExpr& op_expr, const TensorTuple& inputs, TensorTuple* outputs,
                    const AttrMap& attrs) const {
    return Apply(op_expr, inputs, outputs, OpExprInterpContext(attrs));
  }

  Maybe<void> Apply(const OpExpr& op_expr, const TensorTuple& inputs, TensorTuple* outputs) const {
    return Apply(op_expr, inputs, outputs, OpExprInterpContext(AttrMap{}));
  }

  Maybe<void> Apply(const OpExpr& op_expr, const TensorTuple& inputs, TensorTuple* outputs,
                    const OpExprInterpContext& ctx) const;

 private:
  std::shared_ptr<OpExprInterpreter> internal_;
};

Apply()

From the above we know that EagerMirroredInterpreter, EagerConsistentInterpreter, and LazyInterpreter are all wrapped in an AutogradInterpreter shell, and Apply is triggered through that AutogradInterpreter. As the name suggests, AutogradInterpreter is mainly concerned with autograd: in eager mode, it inserts, for each forward op node, the corresponding node used to compute gradients in the backward pass.

Let's look at this code; the role of the key parts is explained in the comments:

Maybe<void> AutogradInterpreter::Apply(const OpExpr& op_expr, const TensorTuple& inputs,
                                       TensorTuple* outputs, const OpExprInterpContext& ctx) const {
  // Decide whether gradients need to be computed. If we are inside a GradMode scope
  // and the op was not registered with gradients disabled, requires_grad is derived
  // from the requires_grad attribute of the input tensors:
  // if any input tensor has requires_grad == True, gradients must be computed.
  bool requires_grad = false;
  if (autograd::GradMode::is_enabled() && !JUST(op_expr.IsGradDisabled())) {
    requires_grad =
        std::any_of(inputs.begin(), inputs.end(),
                    [](const std::shared_ptr<Tensor>& tensor) { return tensor->requires_grad(); });
  }
// This chunk of logic looks a bit awkward because the stride && view mechanism was
// added to OneFlow only recently, and most ops have not yet registered stride
// inference or added support for non-contiguous input tensors.
// So for those ops we must force-convert their inputs to contiguous tensors here.
// NOTE: if this op not support stride, then need to tensor->contiguous()
#define HANDLE_NON_CONTIGUOUS_INPUT(tensor_tuple_ptr)                                       \
  TensorTuple tmp_inputs;                                                                   \
  if (!LazyMode::is_enabled() && !JUST(op_expr.SupportNonContiguous())) {                   \
    tmp_inputs.resize(inputs.size());                                                       \
    for (size_t i = 0; i < inputs.size(); i++) { tmp_inputs[i] = inputs[i]->contiguous(); } \
    tensor_tuple_ptr = &tmp_inputs;                                                         \
  }
  const TensorTuple* inputs_ptr = &inputs;
  HANDLE_NON_CONTIGUOUS_INPUT(inputs_ptr);

  // This is where the actual Interpreter execution happens.
  {
    autograd::AutoGradMode mode(false);
    JUST(internal_->Apply(op_expr, *inputs_ptr, outputs, ctx));
  }

  // In eager mode, for ops with requires_grad == True, insert the backward node
  // (AddNode) used by autograd; it holds the backward gradient function (backward_fn).
  // Lazy mode will construct backward compute graph in passes, so disable autograd if lazy mode.
  std::shared_ptr<OpExprGradClosure> grad_closure(nullptr);
  if (requires_grad && !LazyMode::is_enabled()) {
    grad_closure = JUST(op_expr.GetOrCreateOpGradClosure());
    auto backward_fn = std::make_shared<BackwardFunction>();
    backward_fn->body = [=](const TensorTuple& out_grads, TensorTuple* in_grads,
                            bool create_graph) -> Maybe<void> {
      autograd::AutoGradMode mode(create_graph);
      JUST(grad_closure->Apply(out_grads, in_grads));
      return Maybe<void>::Ok();
    };
    backward_fn->status = [=]() { return grad_closure->state()->SavedTensors().size() > 0; };
    JUST(GetThreadLocalAutogradEngine()->AddNode(op_expr.op_type_name() + "_backward", backward_fn,
                                                 *inputs_ptr, outputs));
  }
  // Update outputs autograd meta
  // Note: if requires_grad is True, we will create a new autograd meta for each output
  // in `AddBackwardFuncPtr` to support inplace operation, so the update should after
  // `AddBackwardFuncPtr`
  for (auto& output : *outputs) {
    output->set_is_leaf(inputs_ptr->size() == 0 || !requires_grad);
    ...
    if (!output->requires_grad()) {
      JUST(output->set_requires_grad(
          requires_grad && IsSupportRequireGradDataType(output->dtype()->data_type())));
    }
  }
  // Capture the forward inputs and outputs; they may be needed during the backward pass.
  if (requires_grad && !LazyMode::is_enabled()) {
    // Capture inputs and outputs after `AddBackwardFuncPtr` because of that grad function
    // node has been attached to them.
    JUST(grad_closure->Capture(*inputs_ptr, *outputs, ctx));
  }
  return Maybe<void>::Ok();
}

That is a fair amount of logic, so let's zero in on what matters. For a simple op like Relu, we only need to care about this part:

  // This is where the actual Interpreter execution happens.
  {
    autograd::AutoGradMode mode(false);
    JUST(internal_->Apply(op_expr, *inputs_ptr, outputs, ctx));
  }

Here, still using the flow.relu call from above as our example: since this is plain eager mode, execution actually lands in EagerInterpreter's Apply method:

Maybe<void> EagerInterpreter::Apply(const OpExpr& op_expr, const TensorTuple& inputs,
                                    TensorTuple* outputs, const OpExprInterpContext& ctx) const {
#define APPLY_IF(op_type)                                              \
  if (const auto* op = dynamic_cast<const op_type##Expr*>(&op_expr)) { \
    return ApplyImpl(*op, inputs, outputs, ctx);                       \
  }

  APPLY_IF(UserOp);
  APPLY_IF(VariableOp);
  APPLY_IF(CastToMirroredOp);
  APPLY_IF(CastFromMirroredOp);
  APPLY_IF(ConsistentToConsistentOp);
  APPLY_IF(CastToConsistentOp);
  APPLY_IF(CastFromConsistentOp);
  APPLY_IF(DistributeSplitOp);
  APPLY_IF(DistributeCloneOp);
  APPLY_IF(DistributeConcatOp);
  APPLY_IF(DistributeAddOp);
  APPLY_IF(FunctionOp);
  APPLY_IF(SelectTopNOp)
#undef APPLY_IF

  OF_UNIMPLEMENTED() << "The type " << op_expr.op_type_name()
                     << " has not been supported in EagerInterpreter::Apply.";
}

Here the APPLY_IF macro adds a branch for each kind of op expression. For most users the ops involved are of the UserOp type, so in practice execution takes this branch:

  if (const auto* op = dynamic_cast<const UserOpExpr*>(&op_expr)) {
    return ApplyImpl(*op, inputs, outputs, ctx);
  }

Next, let's look at EagerMirroredInterpreter::ApplyImpl, which lives in
oneflow/core/framework/op_interpreter/eager_mirrored_op_interpreter.cpp:

Maybe<void> EagerMirroredInterpreter::ApplyImpl(const UserOpExpr& op_expr,
                                                const TensorTuple& inputs, TensorTuple* outputs,
                                                const OpExprInterpContext& ctx) const {
  return NaiveInterpret(op_expr, inputs, outputs, ctx);
}

Its actual work is done by NaiveInterpret.

NaiveInterpret

In short, NaiveInterpret mainly does the following things:

  • check that the input tensors' devices are consistent
  • create the output tensors
  • infer and check shape/stride/dtype for the output tensors
  • build the op execution instruction and dispatch it to the vm

A simplified version of the code looks like this:

Maybe<void> NaiveInterpret(const UserOpExpr& user_op_expr, const TensorTuple& inputs,
                           const Symbol<Device>& default_device, TensorTuple* outputs,
                           const OpExprInterpContext& ctx) {
  const auto& attrs = ctx.attrs;
  std::shared_ptr<EagerBlobObjectList> input_eager_blob_objects =
      std::make_shared<EagerBlobObjectList>(inputs.size());
  // check devices
  for (int i = 0; i < inputs.size(); i++) {
    const auto& input_device = JUST(inputs.at(i)->device());
    if (i > 0) {
      CHECK_OR_RETURN(*default_device == *input_device)
          << Error::RuntimeError()
          << "Expected all tensors to be on the same device, but found at least two devices, "
          << default_device->ToString() << " (positional 0) and " << input_device->ToString()
          << " (positional " << i << ")!";
    }
    input_eager_blob_objects->at(i) = JUST(inputs.at(i)->eager_blob_object());
  }
  // make output tensors
  std::shared_ptr<EagerBlobObjectList> output_eager_blob_objects =
      std::make_shared<EagerBlobObjectList>(outputs->size());
  auto* output_tensor_metas = ThreadLocalDefaultOutputMutTensorMetas(outputs->size());
  for (int i = 0; i < outputs->size(); i++) {
    if (!outputs->at(i)) {
      const auto& tensor_impl = std::make_shared<EagerMirroredTensorImpl>();
      outputs->at(i) = std::make_shared<MirroredTensor>(tensor_impl);
      output_tensor_metas->at(i) = tensor_impl->mut_tensor_meta();
    } else {
      bool has_eager_blob_object = JUST(outputs->at(i)->has_eager_blob_object());
      CHECK_OR_RETURN(has_eager_blob_object);
      output_eager_blob_objects->at(i) = JUST(outputs->at(i)->eager_blob_object());
    }
  }
  Symbol<Stream> stream;
  bool need_check_mem_case = true;

  // Infer devices
  ...

  // Infer shapes strides dtype
  ...

  // Build the op execution instruction and dispatch it to the vm
  JUST(PhysicalRun([&](InstructionsBuilder* builder) -> Maybe<void> {
    return builder->LocalCallOpKernel(kernel, input_eager_blob_objects, output_eager_blob_objects,
                                      ctx, stream);
  }));
  return Maybe<void>::Ok();
}

The Interpreter's final stop is the virtual machine (vm). The vm is one of OneFlow's more distinctive designs and there is a lot to it, so we won't expand on it here :) Simply put, once the op has been dispatched to the vm, it enters a work queue and waits for the vm to schedule and execute it.

5. Compute

After the Interpreter has dispatched the op execution instruction to the vm, and after the scheduling logic has processed it, execution is eventually triggered in

oneflow/core/eager/opkernel_instruction_type.cpp

The core code is as follows:

static inline void OpKernelCompute(LocalCallOpKernelPhyInstrOperand* operand,
                                   DeviceCtx* device_ctx, user_op::OpKernelState* state,
                                   const user_op::OpKernelCache* cache) {
  auto* opkernel = operand->mut_opkernel();
  auto* compute_ctx =
      opkernel->UpdateComputeContext(operand->inputs().get(), operand->outputs().get(),
                                     operand->consistent_tensor_infer_result().get(), device_ctx);
  ...
  operand->user_opkernel()->Compute(compute_ctx, state, cache);
  opkernel->UpdateComputeContext(nullptr, nullptr, nullptr, nullptr);
}

Here,

operand->user_opkernel()->Compute(compute_ctx, state, cache);

is what triggers the actual execution of the op kernel. Generally speaking, an op's kernel dispatches to different implementations depending on the device, and these usually live in the following files (a schematic registration sketch follows the two paths below):

oneflow/user/kernels/xxx_kernel.cpp

oneflow/user/kernels/xxx_kernel.cu
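To make this concrete, here is a heavily simplified sketch of what a device-specific kernel and its registration in those files typically look like. It follows the general user_op::OpKernel / REGISTER_USER_KERNEL pattern, but the class name MyReluCpuKernel and the op type name "my_relu" are hypothetical, and details such as the data-type matcher and shape accessors may differ from the real sources.

// Schematic sketch only: a hand-written CPU kernel in the style of the kernels
// under oneflow/user/kernels/. "my_relu" and MyReluCpuKernel are hypothetical names.
class MyReluCpuKernel final : public user_op::OpKernel {
 public:
  MyReluCpuKernel() = default;
  ~MyReluCpuKernel() override = default;

 private:
  void Compute(user_op::KernelComputeContext* ctx) const override {
    const user_op::Tensor* x = ctx->Tensor4ArgNameAndIndex("x", 0);
    user_op::Tensor* y = ctx->Tensor4ArgNameAndIndex("y", 0);
    const float* x_ptr = x->dptr<float>();
    float* y_ptr = y->mut_dptr<float>();
    const int64_t elem_cnt = x->shape().elem_cnt();
    // Plain elementwise ReLU on the CPU.
    for (int64_t i = 0; i < elem_cnt; ++i) { y_ptr[i] = x_ptr[i] > 0.f ? x_ptr[i] : 0.f; }
  }
  bool AlwaysComputeWhenAllOutputsEmpty() const override { return false; }
};

// Register the kernel so it is selected for "my_relu" on CPU with float data.
REGISTER_USER_KERNEL("my_relu")
    .SetCreateFn<MyReluCpuKernel>()
    .SetIsMatchedHob((user_op::HobDeviceType() == DeviceType::kCPU)
                     && (user_op::HobDataType("y", 0) == GetDataType<float>::value));

A matching .cu file would register a CUDA variant under the same op type name, which is how the per-device dispatch mentioned above is realized.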

The Relu op here is somewhat special in that it is implemented with primitives (primitives are another design unique to OneFlow, with good abstraction and composability). Concretely, this UnaryPrimitive is the combination of an elementwise-unary template and a UnaryFunctor. The call chain is as follows:

UnaryPrimitiveKernel

class UnaryPrimitiveKernel final : public user_op::OpKernel, public user_op::CudaGraphSupport {
 public:
  OF_DISALLOW_COPY_AND_MOVE(UnaryPrimitiveKernel);
  UnaryPrimitiveKernel() = default;
  ~UnaryPrimitiveKernel() = default;

  using PrimitiveFactoryFuncType = std::function<std::unique_ptr<ep::primitive::ElementwiseUnary>(
      user_op::KernelComputeContext*)>;

  UnaryPrimitiveKernel(const std::string& output_name, const std::string& input_name,
                       PrimitiveFactoryFuncType fn)
      : output_name_(output_name),
        input_name_(input_name),
        primitive_factory_func_(std::move(fn)) {}

 private:
  using user_op::OpKernel::Compute;
  void Compute(user_op::KernelComputeContext* ctx) const override {
    auto primitive = primitive_factory_func_(ctx);
    CHECK(primitive);

    const user_op::Tensor* input_tensor = ctx->Tensor4ArgNameAndIndex(input_name_, 0);
    ...
    const int64_t elem_cnt = input_shape.elem_cnt();

    if (elem_cnt != 0) {
      primitive->Launch(ctx->stream(), input_tensor->dptr(), output_tensor->mut_dptr(), elem_cnt);
    }
  }
  bool AlwaysComputeWhenAllOutputsEmpty() const override { return false; }

  std::string output_name_;
  std::string input_name_;
  PrimitiveFactoryFuncType primitive_factory_func_;
};

ep::primitive::ElementwiseUnary

template<UnaryOp unary_op, typename Src, typename Dst>
class ElementwiseUnaryImpl : public ElementwiseUnary {
 public:
  OF_DISALLOW_COPY_AND_MOVE(ElementwiseUnaryImpl);
  ElementwiseUnaryImpl(Scalar attr0, Scalar attr1) : attr0(attr0), attr1(attr1) {}
  ~ElementwiseUnaryImpl() override = default;

  void Launch(Stream* stream, const void* src_ptr, void* dst_ptr, size_t count) override {
    CpuStream* cpu_stream = stream->As<CpuStream>();

    Dst* dst = reinterpret_cast<Dst*>(dst_ptr);
    const Src* src = reinterpret_cast<const Src*>(src_ptr);
    auto functor = UnaryFunctor<DeviceType::kCPU, unary_op, Dst, Src>(attr0, attr1);
    cpu_stream->ParallelFor(0, count, [functor, src, dst](int64_t begin, int64_t end) {
      for (int64_t i = begin; i < end; i++) { dst[i] = functor(src[i]); }
    });
  }

 protected:
  Scalar attr0, attr1;
};

UnaryFunctor

This UnaryFunctor is specialized into different concrete functor implementations according to the unary op type. For the Relu op specifically, the implementation lives in
oneflow/core/ep/common/primitive/unary_functor.h:

template<DeviceType device, typename Dst, typename Src>
struct UnaryFunctor<device, UnaryOp::kRelu, Dst, Src> {
  UnaryFunctor(Scalar attr0, Scalar attr1) {}

  OF_DEVICE_FUNC Dst operator()(Src src) const {
    const Src zero_val = static_cast<Src>(0.0);
    if (src <= zero_val) {
      return static_cast<Dst>(zero_val);
    } else {
      return static_cast<Dst>(src);
    }
  }
};

And with that, we have completed an op's journey from Python to C++. The details are fairly involved, but the overall flow is actually quite simple: setting aside details such as binding and vm scheduling, the main path consists of just four stages: Functor -> Dispatch -> Interpreter -> Kernel Compute.

To implement or add an op, you usually don't need to touch Dispatch or the Interpreter in the middle. You only need to focus on the parts strongly tied to that op: the argument and op-logic checks at the Functor level, and the actual op computation in the Kernel Compute part.
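For instance, for a new elementwise unary op, the kernel-side computation can often be reduced to one more UnaryFunctor specialization in exactly the style of the kRelu functor above. The sketch below is hypothetical: UnaryOp::kMySquare is a made-up enumerator that you would first have to add alongside kRelu, and the surrounding kernel/primitive registration steps are omitted.

// Hypothetical specialization, mirroring the kRelu functor above.
// UnaryOp::kMySquare does not exist in OneFlow; it stands in for whatever new
// enumerator you would add next to kRelu.
template<DeviceType device, typename Dst, typename Src>
struct UnaryFunctor<device, UnaryOp::kMySquare, Dst, Src> {
  UnaryFunctor(Scalar attr0, Scalar attr1) {}

  OF_DEVICE_FUNC Dst operator()(Src src) const {
    // Elementwise square of the input value.
    return static_cast<Dst>(src * src);
  }
};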

(Reference code: https://github.com/Oneflow-In...)

Welcome to download and try the latest release, OneFlow v0.7.0:
https://github.com/Oneflow-In...