What is frame rate?
FPS (frames per second) is a term from the graphics field: it is the number of frames a picture delivers per second, in plain terms the number of images that make up an animation or video each second.
Why does frame rate matter?
The Software Green Alliance App Experience Standard 3.0 (as interpreted by Huawei) explains it this way: for the refresh rate of an app's UI, especially while scrolling, a low frame rate feels janky, so keeping a relatively high frame rate gives a smoother experience. In addition, the lower the refresh rate, the worse the image flickers and judders, and the faster the eyes tire.
So what affects frame rate?
Today's flagship Android phones already ship with 120 Hz displays, but do we really need refresh rates that high? Our frame-rate requirements depend on the scenario: films are perfectly watchable at 24 fps, games need at least 30 fps, and a truly smooth feel needs 60 fps or more, which is why Honor of Kings only stops feeling janky at around 60 fps. Turning the question around: what actually limits frame rate?
- The graphics hardware: the higher the FPS, the more graphics processing power is needed.
- The resolution: the lower the resolution, the easier it is to reach a high frame rate.
A simple formula estimates the required throughput: graphics processing capability = resolution × frame rate. For example, at a resolution of 1024×768 and a frame rate of 24 fps you need:
1024 × 768 × 24 = 18,874,368, i.e. about 18.87 million pixels per second of processing capability; reaching 50 fps would require about 39.3 million pixels per second.
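Just to make the arithmetic concrete, here is a trivial sketch of that formula in plain Java (the class and method names are mine, purely for illustration):

// Rough throughput estimate: processing capability = resolution × frame rate
public final class PixelThroughput {
    public static long pixelsPerSecond(int width, int height, int fps) {
        return (long) width * height * fps;
    }

    public static void main(String[] args) {
        System.out.println(pixelsPerSecond(1024, 768, 24)); // 18874368, about 18.87 million pixels/s
        System.out.println(pixelsPerSecond(1024, 768, 50)); // 39321600, about 39.3 million pixels/s
    }
}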
What counts as a normal frame rate?
The performance chapter of the Software Green Alliance App Experience Standard 3.0 specifies the following:
- ordinary apps should run at ≥ 55 fps;
- game, map, and video apps should run at ≥ 25 fps.
Didn't we just say games only feel smooth at around 60 fps? The Green Alliance's bar for games is surprisingly low, ha.
How do we monitor an Android phone's frame rate?
Since keeping the frame rate high matters so much for user experience, how can we monitor it in real time? The simplest way is in the phone's developer options: enable Profile HWUI rendering (Profile GPU rendering) and show it as on-screen bars, or dump the same data over adb shell (for example with dumpsys gfxinfo). The details are here:
Analyze with Profile GPU Rendering (official Android documentation)
But that is not really the focus this time; we want to analyze it from code. How? The answer is Choreographer. Google added this class in Android API 16. Before each frame is drawn it calls back the doFrame method of the FrameCallback interface, passing the time (in nanoseconds) at which the current frame started rendering. The code looks like this:
/**
* Implement this interface to receive a callback when a new display frame is
* being rendered. The callback is invoked on the {@link Looper} thread to
* which the {@link Choreographer} is attached.
*/
public interface FrameCallback {
/**
* Called when a new display frame is being rendered.
* <p>
* This method provides the time in nanoseconds when the frame started being rendered.
* The frame time provides a stable time base for synchronizing animations
* and drawing. It should be used instead of {@link SystemClock#uptimeMillis()}
* or {@link System#nanoTime()} for animations and drawing in the UI. Using the frame
* time helps to reduce inter-frame jitter because the frame time is fixed at the time
* the frame was scheduled to start, regardless of when the animations or drawing
* callback actually runs. All callbacks that run as part of rendering a frame will
* observe the same frame time so using the frame time also helps to synchronize effects
* that are performed by different callbacks.
* </p><p>
* Please note that the framework already takes care to process animations and
* drawing using the frame time as a stable time base. Most applications should
* not need to use the frame time information directly.
* </p>
*
* @param frameTimeNanos The time in nanoseconds when the frame started being rendered,
* in the {@link System#nanoTime()} timebase. Divide this value by {@code 1000000}
* to convert it to the {@link SystemClock#uptimeMillis()} time base.
*/
public void doFrame(long frameTimeNanos);
}
The Javadoc confirms exactly what was described above. Let's dig deeper: what is the driving force behind the doFrame callback? And then, how do we turn it into a frame rate?
The driving force behind Choreographer
This requires understanding how Android renders. Android's rendering pipeline has been iterated and optimized by Google for a long time; the full process is complex and relies on many frameworks. The low-level libraries we have heard of include Skia and OpenGL: Flutter draws with Skia, which is mainly a 2D library and traditionally rasterizes on the CPU, while OpenGL can draw 3D and runs on the GPU. What finally carries the pixels is the Surface. Every element is drawn and rendered onto this Surface "canvas"; each Window is associated with a Surface, WindowManager manages these windows and passes their data to SurfaceFlinger, and SurfaceFlinger takes the buffers, composites them, and sends the result to the display.
WindowManager supplies SurfaceFlinger with buffers and window metadata, and SurfaceFlinger composites them through the Hardware Composer and outputs to the screen. A Surface is drawn in multiple layers, hence the buffers mentioned above: before Android 4.1 a double-buffering scheme was used; from Android 4.1 onward, triple buffering is used.
That is still not the whole story. At Google I/O 2012, Google announced Project Butter and enabled it in Android 4.1; the key piece is the VSYNC signal. What is it? Look at the diagram first: one screen refresh goes through CPU computation, then the GPU, and finally the Display. VSYNC behaves like a queue (a producer–consumer model): items accumulate over time and are consumed as soon as they are output. We already know the final buffers reach the display through SurfaceFlinger; what VSYNC contributes in between is an orderly scheduling of the rendering pipeline that reduces latency. We also know one VSYNC interval is about 16 ms (at 60 Hz); if a frame takes longer than that, the screen keeps showing the previous frame, which is perceived as a dropped frame.
Why 16 ms? It comes from a simple calculation:
1000 ms / 60 frames ≈ 16.67 ms per frame. Under normal conditions Choreographer's doFrame is therefore called back roughly every 16 ms, and its driving force is exactly this VSYNC signal.
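If you don't want to hard-code 16 ms, the per-frame budget can be derived from the display's actual refresh rate. A minimal sketch, assuming a plain Context is at hand (the class name is mine; getDefaultDisplay() is deprecated on newer APIs but kept here for simplicity):

import android.content.Context;
import android.view.Display;
import android.view.WindowManager;

// Sketch: derive the per-frame time budget from the display's refresh rate.
// At 60 Hz this is about 16.67 ms, at 120 Hz about 8.33 ms.
public final class FrameBudget {
    public static float frameIntervalMillis(Context context) {
        WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        Display display = wm.getDefaultDisplay();   // deprecated on API 30+, fine for illustration
        return 1000f / display.getRefreshRate();    // e.g. 1000 / 60.0f ≈ 16.67 ms
    }
}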
How to compute frame rate with Choreographer
Since doFrame is normally called about every 16 ms, we can build a calculation on that. The following code lays out the whole idea:
// Frame time recorded at the start of the current window, plus a frame counter
private long mLastFrameTime;
private int mFrameCount;

Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
    @Override
    public void doFrame(long frameTimeNanos) {
        // On the first callback (and right after each report) record the window start time
        if (mLastFrameTime == 0) {
            mLastFrameTime = frameTimeNanos;
        }
        // This frame's start time minus the window start time, divided by 1,000,000
        // to convert nanoseconds into milliseconds
        float diff = (frameTimeNanos - mLastFrameTime) / 1000000.0f;
        // Report the frame rate once every 500 ms
        if (diff > 500) {
            double fps = (((double) (mFrameCount * 1000L)) / diff);
            mFrameCount = 0;
            mLastFrameTime = 0;
            Log.d("doFrame", "doFrame:" + fps);
        } else {
            ++mFrameCount;
        }
        // Register to listen for the next vsync signal
        Choreographer.getInstance().postFrameCallback(this);
    }
});
Why aggregate over 500 ms? A full second works just as well; that is up to you. Either way, if doFrame gets called roughly 60 times within one second, things are basically normal.
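Before moving on, the snippet above can be wrapped into a small start/stop helper. This is only a sketch of the same idea — the class, interface, and method names are mine, not from any library — sampling over a one-second window:

import android.view.Choreographer;

// Sketch of a reusable FPS monitor built on Choreographer (illustrative names only)
public class FpsMonitor implements Choreographer.FrameCallback {

    public interface Listener {
        void onFps(double fps);
    }

    private final Listener listener;
    private long windowStartNanos;
    private int frameCount;
    private boolean started;

    public FpsMonitor(Listener listener) {
        this.listener = listener;
    }

    public void start() {
        started = true;
        windowStartNanos = 0;
        frameCount = 0;
        Choreographer.getInstance().postFrameCallback(this);
    }

    public void stop() {
        started = false;   // the next doFrame simply stops re-posting
    }

    @Override
    public void doFrame(long frameTimeNanos) {
        if (!started) {
            return;
        }
        if (windowStartNanos == 0) {
            windowStartNanos = frameTimeNanos;          // open a new sampling window
        } else {
            frameCount++;                               // one more vsync interval completed
            long elapsedNanos = frameTimeNanos - windowStartNanos;
            if (elapsedNanos >= 1_000_000_000L) {       // report roughly once per second
                listener.onFps(frameCount * 1_000_000_000.0 / elapsedNanos);
                windowStartNanos = frameTimeNanos;
                frameCount = 0;
            }
        }
        Choreographer.getInstance().postFrameCallback(this);
    }
}

Usage is a single line on the main thread, e.g. new FpsMonitor(fps -> Log.d("FPS", String.valueOf(fps))).start(). With that baseline in place, let's analyze Matrix's frame-rate detection code and see what it does.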
A look at Matrix's frame-rate detection code
The frame-rate detection code must live in trace-canary. Looking at the overall package layout, we find the ITracer abstraction with four implementations: AnrTracer, EvilMethodTracer, FrameTracer, and StartupTracer. The names alone suggest FrameTracer is the one related to frame rate, so we open FrameTracer, search for "fps", and find the following code:
The private inner class FrameCollectItem inside FrameTracer:
private class FrameCollectItem {
    long sumFrameCost;
    int sumFrame = 0;

    void report() {
        // Frame rate is computed here. 1000.f * sumFrame / sumFrameCost — doesn't that look a lot
        // like our earlier double fps = (((double) (mFrameCount * 1000L)) / diff)?
        // sumFrameCost is the accumulated time span of the window; it could be 500 ms or 1 s.
        float fps = Math.min(60.f, 1000.f * sumFrame / sumFrameCost);
        MatrixLog.i(TAG, "[report] FPS:%s %s", fps, toString());
        try {
            // From here on it just assembles the JSON report, so we can skim it.
            TracePlugin plugin = Matrix.with().getPluginByClass(TracePlugin.class);
            if (null == plugin) {
                return;
            }
            JSONObject dropLevelObject = new JSONObject();
            dropLevelObject.put(DropStatus.DROPPED_FROZEN.name(), dropLevel[DropStatus.DROPPED_FROZEN.index]);
            dropLevelObject.put(DropStatus.DROPPED_HIGH.name(), dropLevel[DropStatus.DROPPED_HIGH.index]);
            dropLevelObject.put(DropStatus.DROPPED_MIDDLE.name(), dropLevel[DropStatus.DROPPED_MIDDLE.index]);
            dropLevelObject.put(DropStatus.DROPPED_NORMAL.name(), dropLevel[DropStatus.DROPPED_NORMAL.index]);
            dropLevelObject.put(DropStatus.DROPPED_BEST.name(), dropLevel[DropStatus.DROPPED_BEST.index]);

            JSONObject dropSumObject = new JSONObject();
            dropSumObject.put(DropStatus.DROPPED_FROZEN.name(), dropSum[DropStatus.DROPPED_FROZEN.index]);
            dropSumObject.put(DropStatus.DROPPED_HIGH.name(), dropSum[DropStatus.DROPPED_HIGH.index]);
            dropSumObject.put(DropStatus.DROPPED_MIDDLE.name(), dropSum[DropStatus.DROPPED_MIDDLE.index]);
            dropSumObject.put(DropStatus.DROPPED_NORMAL.name(), dropSum[DropStatus.DROPPED_NORMAL.index]);
            dropSumObject.put(DropStatus.DROPPED_BEST.name(), dropSum[DropStatus.DROPPED_BEST.index]);

            JSONObject resultObject = new JSONObject();
            resultObject = DeviceUtil.getDeviceInfo(resultObject, plugin.getApplication());
            resultObject.put(SharePluginInfo.ISSUE_SCENE, visibleScene);
            resultObject.put(SharePluginInfo.ISSUE_DROP_LEVEL, dropLevelObject);
            resultObject.put(SharePluginInfo.ISSUE_DROP_SUM, dropSumObject);
            resultObject.put(SharePluginInfo.ISSUE_FPS, fps);

            Issue issue = new Issue();
            issue.setTag(SharePluginInfo.TAG_PLUGIN_FPS);
            issue.setContent(resultObject);
            plugin.onDetectIssue(issue);
        } catch (JSONException e) {
            MatrixLog.e(TAG, "json error", e);
        } finally {
            sumFrame = 0;
            sumDroppedFrames = 0;
            sumFrameCost = 0;
        }
    }
}
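The dropLevel/dropSum arrays hint at how Matrix grades each frame: it buckets frames by how many vsync periods they dropped. The exact thresholds live in Matrix's own constants, so the numbers below are only illustrative — this is a sketch of the idea, not Matrix's code:

// Illustrative bucketing of one frame by its dropped-vsync count
// (threshold values are made up for this sketch, not Matrix's real defaults).
enum DropStatus { DROPPED_BEST, DROPPED_NORMAL, DROPPED_MIDDLE, DROPPED_HIGH, DROPPED_FROZEN }

final class DropClassifier {
    static DropStatus classify(long frameCostMillis, long frameIntervalMillis) {
        // extra vsync periods this frame occupied beyond its own
        long dropped = Math.max(0, frameCostMillis / frameIntervalMillis - 1);
        if (dropped >= 42) return DropStatus.DROPPED_FROZEN;
        if (dropped >= 24) return DropStatus.DROPPED_HIGH;
        if (dropped >= 9)  return DropStatus.DROPPED_MIDDLE;
        if (dropped >= 3)  return DropStatus.DROPPED_NORMAL;
        return DropStatus.DROPPED_BEST;
    }
}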
Following this code, let's find where sumFrame is incremented and see what happens along the way. The next few steps can be skimmed quickly — I won't paste every intermediate snippet, because we already know the monitoring could be registered through Choreographer.FrameCallback; to quickly verify how Matrix Trace implements frame-rate measurement, we skip the details and only paste code again once we reach the core logic.
We find a collect function that performs the ++ operation; going one level up, FrameTracer has another inner class, FPSCollector.
Further up, the doReplay method calls doReplayInner.
Continuing upward, we find IDoFrameListener calling the doReplay function,
and FPSCollector itself extends IDoFrameListener. So let's look at IDoFrameListener.
This is not quite what we analyzed before: there is no trace of Choreographer.FrameCallback, although the calculation looks similar. I'm not convinced — let's keep climbing.
Here we spot a doFrame function; it looks like FrameCallback's shadow, but it isn't. Keep looking.
We reach the UIThreadMonitor class; keep going up,
and find it is wired up in the init function. Time to read the code:
LooperMonitor.register(new LooperMonitor.LooperDispatchListener() {
    @Override
    public boolean isValid() {
        return isAlive;
    }

    @Override
    public void dispatchStart() {
        super.dispatchStart();
        UIThreadMonitor.this.dispatchBegin();
    }

    @Override
    public void dispatchEnd() {
        super.dispatchEnd();
        UIThreadMonitor.this.dispatchEnd();
    }
});
What on earth is LooperMonitor, and why can it sense the frame rate? Let's see what it is:
class LooperMonitor implements MessageQueue.IdleHandler
A quick lookup shows that MessageQueue.IdleHandler lets you register an action to run when the thread goes idle: as soon as the message queue is empty, the registered action is executed. That already differs from our earlier approach, which never considered whether the thread was idle and kept computing the frame rate all the time. At this point I know Matrix does not use FrameCallback at all but computes the frame rate another way. Before spelling that out, let's trace LooperDispatchListener a bit further.
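For reference, that is all an IdleHandler is. A minimal usage sketch (the class and method names are mine; call it on the thread whose idle time you care about):

import android.os.Looper;
import android.os.MessageQueue;

// Sketch: run deferred work only when the current Looper thread's queue goes idle
final class IdleReporter {
    static void reportWhenIdle(final Runnable report) {
        Looper.myQueue().addIdleHandler(new MessageQueue.IdleHandler() {
            @Override
            public boolean queueIdle() {
                report.run();      // e.g. aggregate and flush the frame data collected so far
                return false;      // false = remove after one run; return true to stay registered
            }
        });
    }
}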
We find a LooperPrinter — that is what does the dispatching. Let's look at LooperPrinter:
class LooperPrinter implements Printer
// A printer?
public interface Printer {
/**
* Write a line of text to the output. There is no need to terminate
* the given string with a newline.
*/
void println(String x);
}
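To see why a Printer is enough, remember what Looper.loop() does with it: before handling each message it prints a line starting with ">>>>> Dispatching to", and after handling it a line starting with "<<<<< Finished to". Whoever owns the Printer therefore gets notified at the start and end of every main-thread message. A minimal sketch of that idea on its own (the class name is mine; unlike Matrix, this simply replaces any existing printer instead of wrapping it):

import android.os.Looper;
import android.os.SystemClock;
import android.util.Log;
import android.util.Printer;

// Sketch: use the main Looper's logging Printer to time every main-thread message
public final class MainLooperTimer {
    public static void install() {
        Looper.getMainLooper().setMessageLogging(new Printer() {
            private long startMs;

            @Override
            public void println(String x) {
                if (x.startsWith(">>>>> Dispatching to ")) {
                    startMs = SystemClock.uptimeMillis();            // a message starts being handled
                } else if (x.startsWith("<<<<< Finished to ")) {
                    long costMs = SystemClock.uptimeMillis() - startMs;
                    if (costMs > 16) {                               // longer than one 60 Hz frame budget
                        Log.w("MainLooperTimer", "slow main-thread message, cost=" + costMs + "ms");
                    }
                }
            }
        });
    }
}

Matrix does essentially this, but it first reflects out the Looper's original mLogging printer and wraps it so it does not clobber anyone else's hook — which is exactly what the resetPrinter method below is doing.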
Now, how is this LooperPrinter object created? Searching for its references leads to the following code:
private synchronized void resetPrinter() {
    Printer originPrinter = null;
    try {
        if (!isReflectLoggingError) {
            originPrinter = ReflectUtils.get(looper.getClass(), "mLogging", looper);
            if (originPrinter == printer && null != printer) {
                return;
            }
        }
    } catch (Exception e) {
        isReflectLoggingError = true;
        Log.e(TAG, "[resetPrinter] %s", e);
    }
    if (null != printer) {
        MatrixLog.w(TAG, "maybe thread:%s printer[%s] was replace other[%s]!",
                looper.getThread().getName(), printer, originPrinter);
    }
    // setMessageLogging installs the Printer that receives Looper.loop()'s log lines;
    // by setting our own, all of that printing is now handed to LooperPrinter.
    looper.setMessageLogging(printer = new LooperPrinter(originPrinter));
    if (null != originPrinter) {
        MatrixLog.i(TAG, "reset printer, originPrinter[%s] in %s", originPrinter, looper.getThread().getName());
    }
}
Following this looper upward, I found where it comes from: it is the main thread's Looper. We all know the main thread is the one that keeps refreshing the UI. So that is the trick: Matrix piggybacks on the logging mechanism the Looper already provides, and processes the collected data when the thread is idle, to monitor frame rate (among other things) — a nice design worth learning from. I also noticed one more detail: it still relies on Choreographer to compute the frame rate, using reflection to pull out some of its fields, for example:
// the interval between two vsync signals
frameIntervalNanos = ReflectUtils.reflectObject(choreographer, "mFrameIntervalNanos", Constants.DEFAULT_FRAME_DURATION);
// the vsync signal receiver
vsyncReceiver = ReflectUtils.reflectObject(choreographer, "mDisplayEventReceiver", null);
The frameTimeNanos passed into the doFrame callback above is in fact obtained from this vsyncReceiver.
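With the frame interval in hand, turning one main-thread message into a dropped-frame count is just a division. Roughly (a sketch of the idea, not Matrix's exact code; the class and method names are mine):

// Sketch: estimate how many vsync periods a single main-thread message spanned.
// dispatchBeginNs / dispatchEndNs come from the dispatchStart/dispatchEnd hooks above,
// frameIntervalNanos from the reflected mFrameIntervalNanos (or 1e9 / refresh rate).
final class DroppedFrames {
    static long estimate(long dispatchBeginNs, long dispatchEndNs, long frameIntervalNanos) {
        long costNs = Math.max(0, dispatchEndNs - dispatchBeginNs);
        return costNs / frameIntervalNanos;   // every full vsync period the message occupies is a missed draw
    }
}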
From the source screenshot it is clear that computing frame rate cannot do without Choreographer after all.
Which raises the next question.
Why can Looper's logging mechanism be used to measure frame rate?
If you have the same doubt I did, a look at Choreographer's source will make it clear. Here we go:
private static final ThreadLocal<Choreographer> sThreadInstance =
        new ThreadLocal<Choreographer>() {
    @Override
    protected Choreographer initialValue() {
        Looper looper = Looper.myLooper();
        if (looper == null) {
            throw new IllegalStateException("The current thread must have a looper!");
        }
        Choreographer choreographer = new Choreographer(looper, VSYNC_SOURCE_APP);
        if (looper == Looper.getMainLooper()) {
            mMainInstance = choreographer;
        }
        return choreographer;
    }
};
From this snippet we can conclude: Choreographer is thread-private. A variable created through ThreadLocal is only accessible from its own thread, so one thread corresponds to one Choreographer, and the main thread's Choreographer is mMainInstance. Now look at another piece of code:
private Choreographer(Looper looper, int vsyncSource) {
mLooper = looper;
mHandler = new FrameHandler(looper);
mDisplayEventReceiver = USE_VSYNC
? new FrameDisplayEventReceiver(looper, vsyncSource)
: null;
mLastFrameTimeNanos = Long.MIN_VALUE;
mFrameIntervalNanos = (long)(1000000000 / getRefreshRate());
mCallbackQueues = new CallbackQueue[CALLBACK_LAST + 1];
for (int i = 0; i <= CALLBACK_LAST; i++) {
    mCallbackQueues[i] = new CallbackQueue();
}
// b/68769804: For low FPS experiments.
setFPSDivisor(SystemProperties.getInt(ThreadedRenderer.DEBUG_FPS_DIVISOR, 1));
}
This is Choreographer's constructor. Here we see that a Choreographer is created around a Looper and the two have a one-to-one relationship: a thread that has a Looper also has its own Choreographer. Two members stand out, FrameHandler and FrameDisplayEventReceiver; we don't yet know what they are for, so let's read on:
private final class FrameHandler extends Handler {
    public FrameHandler(Looper looper) {
        super(looper);
    }

    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
case MSG_DO_FRAME:
doFrame(System.nanoTime(), 0);
break;
case MSG_DO_SCHEDULE_VSYNC:
doScheduleVsync();
break;
case MSG_DO_SCHEDULE_CALLBACK:
doScheduleCallback(msg.arg1);
break;
}
}
}
void doFrame(long frameTimeNanos, int frame) {
final long startNanos;
synchronized (mLock) {
    if (!mFrameScheduled) {
        return; // no work to do
    }
if (DEBUG_JANK && mDebugPrintNextFrameTimeDelta) {
mDebugPrintNextFrameTimeDelta = false;
Log.d(TAG, "Frame time delta:"
+ ((frameTimeNanos - mLastFrameTimeNanos) * 0.000001f) + "ms");
}
long intendedFrameTimeNanos = frameTimeNanos;
startNanos = System.nanoTime();
final long jitterNanos = startNanos - frameTimeNanos;
if (jitterNanos >= mFrameIntervalNanos) {
final long skippedFrames = jitterNanos / mFrameIntervalNanos;
if (skippedFrames >= SKIPPED_FRAME_WARNING_LIMIT) {
Log.i(TAG, "Skipped" + skippedFrames + "frames!"
+ "The application may be doing too much work on its main thread.");
}
final long lastFrameOffset = jitterNanos % mFrameIntervalNanos;
if (DEBUG_JANK) {
    Log.d(TAG, "Missed vsync by " + (jitterNanos * 0.000001f) + " ms "
            + "which is more than the frame interval of "
            + (mFrameIntervalNanos * 0.000001f) + " ms!  "
            + "Skipping " + skippedFrames + " frames and setting frame "
            + "time to " + (lastFrameOffset * 0.000001f) + " ms in the past.");
}
frameTimeNanos = startNanos - lastFrameOffset;
}
if (frameTimeNanos < mLastFrameTimeNanos) {
    if (DEBUG_JANK) {
        Log.d(TAG, "Frame time appears to be going backwards.  May be due to a "
                + "previously skipped frame.  Waiting for next vsync.");
}
scheduleVsyncLocked();
return;
}
if (mFPSDivisor > 1) {
long timeSinceVsync = frameTimeNanos - mLastFrameTimeNanos;
if (timeSinceVsync < (mFrameIntervalNanos * mFPSDivisor) && timeSinceVsync > 0) {
    scheduleVsyncLocked();
return;
}
}
mFrameInfo.setVsync(intendedFrameTimeNanos, frameTimeNanos);
mFrameScheduled = false;
mLastFrameTimeNanos = frameTimeNanos;
}
try {
    Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Choreographer#doFrame");
AnimationUtils.lockAnimationClock(frameTimeNanos / TimeUtils.NANOS_PER_MS);
mFrameInfo.markInputHandlingStart();
doCallbacks(Choreographer.CALLBACK_INPUT, frameTimeNanos);
mFrameInfo.markAnimationsStart();
doCallbacks(Choreographer.CALLBACK_ANIMATION, frameTimeNanos);
mFrameInfo.markPerformTraversalsStart();
doCallbacks(Choreographer.CALLBACK_TRAVERSAL, frameTimeNanos);
doCallbacks(Choreographer.CALLBACK_COMMIT, frameTimeNanos);
} finally {
    AnimationUtils.unlockAnimationClock();
Trace.traceEnd(Trace.TRACE_TAG_VIEW);
}
if (DEBUG_FRAMES) {
    final long endNanos = System.nanoTime();
    Log.d(TAG, "Frame " + frame + ": Finished, took "
            + (endNanos - startNanos) * 0.000001f + " ms, latency "
            + (startNanos - frameTimeNanos) * 0.000001f + " ms.");
}
}
From this code we see that FrameHandler receives messages and, when handling them, calls Choreographer's doFrame function. Note that this doFrame is not the FrameCallback.doFrame you pass to postFrameCallback — but searching the source shows FrameCallback.doFrame is triggered from here, inside doCallbacks. I won't walk through that detail; instead, let's see who sends FrameHandler these messages, which brings us to FrameDisplayEventReceiver:
private final class FrameDisplayEventReceiver extends DisplayEventReceiver
implements Runnable {
private boolean mHavePendingVsync;
private long mTimestampNanos;
private int mFrame;
public FrameDisplayEventReceiver(Looper looper, int vsyncSource) {
    super(looper, vsyncSource);
}
@Override
public void onVsync(long timestampNanos, int builtInDisplayId, int frame) {
// Ignore vsync from secondary display.
// This can be problematic because the call to scheduleVsync() is a one-shot.
// We need to ensure that we will still receive the vsync from the primary
// display which is the one we really care about. Ideally we should schedule
// vsync for a particular display.
// At this time Surface Flinger won't send us vsyncs for secondary displays
// but that could change in the future so let's log a message to help us remember
// that we need to fix this.
if (builtInDisplayId != SurfaceControl.BUILT_IN_DISPLAY_ID_MAIN) {
Log.d(TAG, "Received vsync from secondary display, but we don't support "+"this case yet. Choreographer needs a way to explicitly request "+"vsync for a specific display to ensure it doesn't lose track"
+ "of its scheduled vsync.");
scheduleVsync();
return;
}
// Post the vsync event to the Handler.
// The idea is to prevent incoming vsync events from completely starving
// the message queue. If there are no messages in the queue with timestamps
// earlier than the frame time, then the vsync event will be processed immediately.
// Otherwise, messages that predate the vsync event will be handled first.
long now = System.nanoTime();
if (timestampNanos > now) {
    Log.w(TAG, "Frame time is " + ((timestampNanos - now) * 0.000001f)
            + " ms in the future!  Check that graphics HAL is generating vsync "
+ "timestamps using the correct timebase.");
timestampNanos = now;
}
if (mHavePendingVsync) {
Log.w(TAG, "Already have a pending vsync event. There should only be"
+ "one at a time.");
} else {mHavePendingVsync = true;}
mTimestampNanos = timestampNanos;
mFrame = frame;
Message msg = Message.obtain(mHandler, this);
msg.setAsynchronous(true);
mHandler.sendMessageAtTime(msg, timestampNanos / TimeUtils.NANOS_PER_MS);
}
@Override
public void run() {
mHavePendingVsync = false;
doFrame(mTimestampNanos, mFrame);
}
}
Searching the source for onVsync shows that this function is invoked by DisplayEventReceiver in the android.view package. DisplayEventReceiver is actually initialized in the C++ layer and listens for the vsync signal, which is delivered by SurfaceFlinger. So now we know: FrameDisplayEventReceiver receives the onVsync signal and then, via mHandler (the FrameHandler above), triggers one round of message delivery. You might still feel something is off, because only case MSG_DO_FRAME triggers the doFrame function, yet no such message is set here — there is no mHandler.obtainMessage(MSG_DO_FRAME). Look closely at Message.obtain(mHandler, this): the this is FrameDisplayEventReceiver, and FrameDisplayEventReceiver implements Runnable, so when FrameHandler receives the message it executes FrameDisplayEventReceiver's run function, and run calls doFrame. Now everything connects.
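The Message.obtain(mHandler, this) detail is easy to verify outside Choreographer: when a Message carries a Runnable callback, Handler.dispatchMessage() runs that callback and never reaches handleMessage(). A tiny sketch (the class and method names are mine):

import android.os.Handler;
import android.os.Looper;
import android.os.Message;
import android.util.Log;

// Sketch: a Message created with a Runnable is executed by running that Runnable
// on the Handler's thread, exactly like FrameDisplayEventReceiver.run() above.
final class CallbackMessageDemo {
    static void demo() {
        Handler handler = new Handler(Looper.getMainLooper()) {
            @Override
            public void handleMessage(Message msg) {
                Log.d("Demo", "never hit for messages that carry a callback");
            }
        };
        Message msg = Message.obtain(handler, new Runnable() {
            @Override
            public void run() {
                Log.d("Demo", "run() executes on the handler's thread");
            }
        });
        handler.sendMessage(msg);   // dispatchMessage() sees msg.callback != null and calls run()
    }
}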
Alright, we can now summarize why this works:
Choreographer's onVsync messages are consumed through the Looper of the thread it lives on, so by monitoring the main thread's Looper messages we can monitor the frame rate as well. That is the whole reasoning.
Summary
- Install a Printer on the main Looper to hook message dispatch.
- Use the dispatch start/end callbacks to compute the frame rate.
- Use MessageQueue.IdleHandler to avoid the busy periods and process the collected data when the thread is idle.
That is roughly it. If you discover anything new, or I got something wrong, feel free to point it out in the comments.