[小白入门]基于ERFNet车道线检测入门语义分割

P粉084495128

Published 2025-07-28 10:31:20 | 469 views | Source: php中文网 (original)

This article walks through an introductory lane-detection project built on the CULane dataset and the ERFNet network. It first explains the semantic segmentation task: pixel-level classification, similar to having a machine automatically "cut out" objects and color them by class. It then introduces the CULane dataset and works through the project in detail, covering data processing, building the ERFNet network, model training and inference, and finally the deployment results.



1. Project Background

  • AI Studio is a great learning platform, and I'm sure it constantly draws in many beginners like me who are curious about artificial intelligence. This time, I want to put together an entry-level project so we can learn the fundamental task in the image segmentation field together: semantic segmentation.
  • In this project we will build an ERFNet network on the CULane lane dataset to implement simple lane detection.

2. A Brief Introduction to Segmentation Tasks

Broadly speaking, image segmentation tasks fall into three major categories:

  • Semantic segmentation
  • Instance segmentation
  • Panoptic segmentation

This project focuses on the basic semantic segmentation task. If the more advanced instance and panoptic segmentation interest you, I will share related projects later on; you can also search AI Studio yourself, where many excellent developers have long since shared their curated projects.

So what exactly is the semantic segmentation task?


In short, semantic segmentation separates and extracts the objects in an image by category; in essence, it is classification at the level of individual pixels.

Still a bit lost? Let's simplify further.

Segmentation is actually quite similar to matting (cutting a subject out of a photo), except that matting is usually done by hand, while segmentation can be handed off to a machine. So informally, thinking of it as automated matting is not unreasonable.


Note, though, that the segmentation result is not the pretty photo itself! What a semantic segmentation model actually outputs is a mask image. And when the task segments multiple categories, we color the mask by class to make the result easier to visualize, which is how the colored results above were produced.


Once the "matting" is done, we color the result by category.
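The whole "classify every pixel, then color by class" idea can be sketched in a few lines of numpy. This is an illustrative toy, not code from the project: the logits and the two-color palette are made up.

```python
import numpy as np

# Toy class scores for a 4x4 image: shape [num_classes, H, W] (2 classes here).
# Class 1 scores high on the diagonal, class 0 everywhere else.
logits = np.array([np.zeros((4, 4)), np.eye(4)])

# Pixel-level classification: pick the best-scoring class at every pixel.
mask = np.argmax(logits, axis=0)                  # [H, W] array of class ids

# Color the mask for visualization: class 0 black, class 1 green.
palette = np.array([[0, 0, 0], [0, 255, 0]], dtype=np.uint8)
colored = palette[mask]                           # [H, W, 3] RGB image

print(mask)            # 1s on the diagonal, 0s elsewhere
print(colored.shape)   # (4, 4, 3)
```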

Now, let's dive into the hands-on part of the project!

3. The CULane Lane Dataset

CULane is a large-scale, challenging lane-detection dataset, used mainly in academic research on traffic-lane detection. It was collected with cameras mounted on six different vehicles driven by different drivers in Beijing. More than 55 hours of video were recorded, from which 133,235 frames were extracted. The dataset is split into 88,880 training images, 9,675 validation images, and 34,680 test images. The test set is further divided into a normal category and 8 challenging categories, corresponding to the nine examples below.

In every frame, the authors manually annotated the traffic lanes with cubic splines. For lanes that are occluded by vehicles or otherwise invisible, the authors still annotate them from context, because they want algorithms to be able to distinguish obstacles on the road; accordingly, lanes on the far side of an obstacle are not annotated. In this dataset the authors focus on detecting the four lane markings that matter most in practical applications; other lane markings are left unannotated.

Still a bit abstract from the description alone? Let's dig into a set of example images.


Looking at the label image, you can see that the dataset annotates the four lane lines in the original image and treats everything else as background. That is exactly the effect we want from this project: given an input image, the model outputs a lane segmentation result resembling the label image.

4. The ERFNet Network Architecture


Concretely, ERFNet is a semantic segmentation network built on residual connections and factorized convolutions. It aims to raise the frame rate the model can process without sacrificing accuracy, which suits the real-time requirements of autonomous driving well; and because it is lightweight, it also lends itself to hardware deployment, making it a valuable reference in autonomous driving and lane detection.

The Non-bottleneck-1D residual block used in ERFNet improves on the non-bottleneck residual block proposed in ResNet, the seminal residual network: each 2D convolution is factorized into two 1D convolutions, which adds nonlinear layers while reducing the overall parameter count. With the same parameter budget, the improved Non-bottleneck-1D block can enlarge the effective convolution size and strengthen the receptive field, which helps ERFNet preserve accuracy.

At a high level, ERFNet follows the classic encoder-decoder structure of U-Net: the encoder downsamples (extracting features) and the decoder upsamples (restoring resolution), with feature fusion and dilated convolutions interleaved. The model therefore stays lightweight while delivering solid accuracy.
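To make the parameter saving of the factorization concrete, here is a back-of-the-envelope weight count comparing two 3x3 convolutions (the classic non-bottleneck block) against two 3x1 + 1x3 pairs (Non-bottleneck-1D). This is an illustrative sketch, counting conv weights only and ignoring biases and batch norm:

```python
# Weight count of a conv layer with kernel (kh, kw) and C input/output channels.
def conv_weights(kh, kw, c):
    return kh * kw * c * c

C = 64  # channel width of the first encoder stage
two_3x3 = 2 * conv_weights(3, 3, C)                                  # classic block: two 3x3 convs
two_1d_pairs = 2 * (conv_weights(3, 1, C) + conv_weights(1, 3, C))   # factorized: two 3x1+1x3 pairs

print(two_3x3, two_1d_pairs)  # 73728 49152, i.e. a third fewer weights
```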


Core modules


For implementation details, you can refer to Zheng's project:

ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation

Of course, studying a network architecture can feel heavy going when you're just starting out. Learning is a continuous process: you may only half-understand it now, but as your knowledge accumulates, I'm sure you'll soon appreciate the subtleties. In the meantime, does being unfamiliar with the architecture mean we can't keep going? Of course not!

Here is my own simple way of looking at it:

1. Neural networks are mostly about building end-to-end systems. What does that mean? You give the network an input, and it gives you an output; the computation the data goes through inside is essentially invisible to an ordinary user. Why does that matter? There are many open-source network architectures available, and even if we don't fully grasp the deep theory behind them, we can take a user's point of view: we may not understand the details, but we can still use them!

2. To take a "shortcut" and quickly use an unfamiliar network, we mainly need to look at how its input and output are constructed, because the output head differs across tasks. A simple example: in image classification, the network ends with fully connected layers and outputs one probability per class. In a semantic segmentation network, however, the output is a multi-channel image the same size as the input, where the number of channels equals the number of classes. This is a defining feature of segmentation tasks, so don't mix the two up. For our lane-detection project, the input is an image tensor shaped [1, 3, 576, 1640] (3 channels because the image is RGB), and the output is shaped [1, 2, 576, 1640]: the 1 is the number of images (one image in, one result out), the 2 is the number of classes (this is a binary task, background vs. lane line, and in segmentation the number of output channels equals the number of classes), and (576, 1640) is the spatial size, which matches the input.
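The shape story above can be sketched in plain numpy. The zero tensors are stand-ins for a real image and the model's output; only the shapes matter here:

```python
import numpy as np

x = np.zeros((1, 3, 576, 1640), dtype=np.float32)  # input: 1 RGB image, NCHW layout
y = np.zeros((1, 2, 576, 1640), dtype=np.float32)  # output: 2 class-score maps, same H and W

# To get a single-channel segmentation mask, take the argmax over the class axis.
mask = np.argmax(y, axis=1)
print(mask.shape)  # (1, 576, 1640): one class id per pixel, same spatial size as the input
```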

5. Hands-On Code

The usual formalities first: the libraries still have to be imported.

In [1]
import random
import paddle
import paddle.nn as nn
from paddle.nn import functional as F
from paddle.io import Dataset
from paddle.vision.transforms import transforms as T
import os
import io
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
from PIL import Image as PilImage
%matplotlib inline
       
       
In [ ]
# Step 1: unzip the dataset (only needs to run once)
!unzip -oq /home/aistudio/data/data112899/CULane.zip
   
In [2]
# Step 2: split the dataset into train/val/test at roughly 8:1:1
path_origin = 'CULane/JPEGImages/'
path_seg = 'CULane/Annotations/'
pic_dir = os.listdir(path_origin)

f_train = open('CULane/train_list.txt', 'w')
f_val = open('CULane/val_list.txt', 'w')
f_test = open('CULane/test_list.txt', 'w')

for i in range(len(pic_dir)):
    if i % 9 == 0:
        f_val.write(path_origin + pic_dir[i] + '\t' + path_seg + pic_dir[i].split('.')[0] + '.png' + '\n')
    elif i % 10 == 0:
        f_test.write(path_origin + pic_dir[i] + '\t' + path_seg + pic_dir[i].split('.')[0] + '.png' + '\n')
    else:
        f_train.write(path_origin + pic_dir[i] + '\t' + path_seg + pic_dir[i].split('.')[0] + '.png' + '\n')

f_train.close()
f_val.close()
f_test.close()
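A quick count shows what the modulo rule above actually yields: `i % 9 == 0` fires a bit more often than one in ten, so the split comes out near 80% / 11% / 9% rather than exactly 8:1:1. The file count below is hypothetical, purely for illustration:

```python
n = 90000  # hypothetical number of files, not the real dataset size
val = sum(1 for i in range(n) if i % 9 == 0)
test = sum(1 for i in range(n) if i % 9 != 0 and i % 10 == 0)
train = n - val - test
print(train, val, test)  # 72000 10000 8000
```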
   
In [3]
# Step 3: check that the paths in train_list.txt are set correctly by loading and displaying a few images.
with open('CULane/train_list.txt', 'r') as f:
    i = 0
    for line in f.readlines():
        image_path, label_path = line.strip().split('\t')
        image = np.array(PilImage.open(image_path))
        label = np.array(PilImage.open(label_path))
        if i > 2:
            break
        # Display the image/label pair
        plt.figure()

        plt.subplot(1, 2, 1)
        plt.title('Train Image')
        plt.imshow(image.astype('uint8'))
        plt.axis('off')

        plt.subplot(1, 2, 2)
        plt.title('Label')
        plt.imshow(label.astype('uint8'), cmap='gray')
        plt.axis('off')

        plt.show()
        i = i + 1
       
       
(Three pairs of training images and label masks are displayed.)
               
In [4]
# Step 4: build the data reader
IMAGE_SIZE = (576, 1640)

class MyDateset(Dataset):
    """
    Dataset definition
    """
    def __init__(self, mode='train'):
        """
        Constructor
        """
        self.image_size = IMAGE_SIZE
        self.mode = mode.lower()

        self.train_images = []
        self.label_images = []
        with open('CULane/{}_list.txt'.format(self.mode), 'r') as f:
            for line in f.readlines():
                image, label = line.strip().split('\t')
                self.train_images.append(image)
                self.label_images.append(label)

    def _load_img(self, path, color_mode='rgb', transforms=[]):
        """
        Unified image-loading helper that normalizes size and channels
        """
        with open(path, 'rb') as f:
            img = PilImage.open(io.BytesIO(f.read()))
            if color_mode == 'grayscale':
                if img.mode not in ('L', 'I;16', 'I'):
                    img = img.convert('L')
            elif color_mode == 'rgba':
                if img.mode != 'RGBA':
                    img = img.convert('RGBA')
            elif color_mode == 'rgb':
                if img.mode != 'RGB':
                    img = img.convert('RGB')
            else:
                raise ValueError('color_mode must be "grayscale", "rgb", or "rgba"')

            return T.Compose([T.Resize(self.image_size)] + transforms)(img)

    def __getitem__(self, idx):
        """
        Return image, label
        """
        train_image = self._load_img(self.train_images[idx],
                                     transforms=[
                                         T.Transpose(),
                                         T.Normalize(mean=127.5, std=127.5)
                                     ])  # load the original image
        label_image = self._load_img(self.label_images[idx],
                                     color_mode='grayscale',
                                     transforms=[T.Grayscale()])  # load the label image

        # Return image, label
        train_image = np.array(train_image, dtype='float32')
        label_image = np.array(label_image, dtype='int64')
        return train_image, label_image

    def __len__(self):
        """
        Return the dataset size
        """
        return len(self.train_images)
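One detail of the reader worth spelling out: `T.Normalize(mean=127.5, std=127.5)` rescales 8-bit pixel values from [0, 255] to [-1, 1]. The same arithmetic in plain numpy (a standalone sketch, independent of Paddle):

```python
import numpy as np

# The three key points of the 8-bit range: minimum, midpoint, maximum.
pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)
normalized = (pixels - 127.5) / 127.5
print(normalized)  # [-1.  0.  1.]
```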
   
In [5]
# Step 5: build the model. We use ERFNet here; if the structure is hard to follow
# for now, feel free to skip ahead. You only need to know what the inputs and outputs look like.
class non_bottleneck_1d(paddle.nn.Layer):
    def __init__(self, chann, dropprob, dilated):
        super().__init__()
        self.conv3x1_1 = paddle.nn.Conv2D(in_channels=chann, out_channels=chann, kernel_size=(3, 1), stride=1, padding=(1, 0), bias_attr=True)
        self.conv1x3_1 = paddle.nn.Conv2D(in_channels=chann, out_channels=chann, kernel_size=(1, 3), stride=1, padding=(0, 1), bias_attr=True)
        self.bn1 = paddle.nn.BatchNorm(chann, epsilon=1e-03)
        self.conv3x1_2 = paddle.nn.Conv2D(in_channels=chann, out_channels=chann, kernel_size=(3, 1), stride=1, padding=(1 * dilated, 0), bias_attr=True,
                                          dilation=(dilated, 1))
        self.conv1x3_2 = paddle.nn.Conv2D(in_channels=chann, out_channels=chann, kernel_size=(1, 3), stride=1, padding=(0, 1 * dilated), bias_attr=True,
                                          dilation=(1, dilated))
        self.bn2 = paddle.nn.BatchNorm(chann, epsilon=1e-03)
        self.dropout = paddle.nn.Dropout(dropprob)
        self.p = dropprob

    def forward(self, input):
        output = self.conv3x1_1(input)
        output = paddle.nn.functional.relu(output)
        output = self.conv1x3_1(output)
        output = self.bn1(output)
        output = paddle.nn.functional.relu(output)
        output = self.conv3x1_2(output)
        output = paddle.nn.functional.relu(output)
        output = self.conv1x3_2(output)
        output = self.bn2(output)
        if self.p != 0:
            output = self.dropout(output)
        return paddle.nn.functional.relu(output + input)


class DownsamplerBlock(paddle.nn.Layer):
    def __init__(self, ninput, noutput):
        super().__init__()
        self.conv = paddle.nn.Conv2D(in_channels=ninput, out_channels=noutput - ninput, kernel_size=3,
                                     stride=2, padding=1, bias_attr=True)
        self.pool = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
        self.bn = paddle.nn.BatchNorm(noutput, epsilon=1e-3)

    def forward(self, input):
        output = paddle.concat(x=[self.conv(input), self.pool(input)], axis=1)
        output = self.bn(output)
        return paddle.nn.functional.relu(output)


class Encoder(paddle.nn.Layer):
    def __init__(self, num_classes):
        super().__init__()
        self.initial_block = DownsamplerBlock(3, 16)
        self.layers = paddle.nn.LayerList()
        self.layers.append(DownsamplerBlock(16, 64))
        for x in range(0, 5):  # 5 times
            self.layers.append(non_bottleneck_1d(64, 1, 1))
        self.layers.append(DownsamplerBlock(64, 128))
        for x in range(0, 2):  # 2 times
            self.layers.append(non_bottleneck_1d(128, 1, 2))
            self.layers.append(non_bottleneck_1d(128, 1, 4))
            self.layers.append(non_bottleneck_1d(128, 1, 8))
            self.layers.append(non_bottleneck_1d(128, 1, 16))
        self.output_conv = paddle.nn.Conv2D(in_channels=128, out_channels=num_classes, kernel_size=1, stride=1, padding=0, bias_attr=True)

    def forward(self, input, predict=False):
        output = self.initial_block(input)
        for layer in self.layers:
            output = layer(output)
        if predict:
            output = self.output_conv(output)
        return output


class UpsamplerBlock(paddle.nn.Layer):
    def __init__(self, ninput, noutput, output_size=[16, 16]):
        super().__init__()
        self.conv = paddle.nn.Conv2DTranspose(ninput, noutput, kernel_size=3, stride=2, padding=1, bias_attr=True)
        self.bn = paddle.nn.BatchNorm(noutput, epsilon=1e-3)
        self.output_size = output_size

    def forward(self, input):
        output = self.conv(input, output_size=self.output_size)
        output = self.bn(output)
        return paddle.nn.functional.relu(output)


class Decoder(paddle.nn.Layer):
    def __init__(self, num_classes, raw_size=[576, 1640]):
        super().__init__()
        self.layers = paddle.nn.LayerList()
        self.raw_size = raw_size
        self.layers.append(UpsamplerBlock(128, 64, output_size=[raw_size[0] // 4, raw_size[1] // 4]))
        self.layers.append(non_bottleneck_1d(64, 0, 1))
        self.layers.append(non_bottleneck_1d(64, 0, 1))
        self.layers.append(UpsamplerBlock(64, 16, output_size=[raw_size[0] // 2, raw_size[1] // 2]))
        self.layers.append(non_bottleneck_1d(16, 0, 1))
        self.layers.append(non_bottleneck_1d(16, 0, 1))
        self.output_conv = paddle.nn.Conv2DTranspose(16, num_classes, kernel_size=2, stride=2, padding=0, bias_attr=True)

    def forward(self, input):
        output = input
        for layer in self.layers:
            output = layer(output)
        output = self.output_conv(output, output_size=[self.raw_size[0], self.raw_size[1]])
        return output


class ERFNet(paddle.nn.Layer):
    def __init__(self, num_classes, raw_size=[576, 1640]):
        super().__init__()
        self.encoder = Encoder(num_classes)
        self.decoder = Decoder(num_classes, raw_size=raw_size)

    def forward(self, input):
        output = self.encoder(input)
        return self.decoder.forward(output)
   
In [6]
# Step 6: print the network summary and check that a forward pass works.
paddle.summary(ERFNet(2), (1, 3, 576, 1640))
       
       
--------------------------------------------------------------------------------
    Layer (type)         Input Shape          Output Shape         Param #    
================================================================================
      Conv2D-1       [[1, 3, 576, 1640]]   [1, 13, 288, 820]         364      
    MaxPool2D-1      [[1, 3, 576, 1640]]    [1, 3, 288, 820]          0       
    BatchNorm-1      [[1, 16, 288, 820]]   [1, 16, 288, 820]         64       
 DownsamplerBlock-1  [[1, 3, 576, 1640]]   [1, 16, 288, 820]          0       
      Conv2D-2       [[1, 16, 288, 820]]   [1, 48, 144, 410]        6,960     
    MaxPool2D-2      [[1, 16, 288, 820]]   [1, 16, 144, 410]          0       
    BatchNorm-2      [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
 DownsamplerBlock-2  [[1, 16, 288, 820]]   [1, 64, 144, 410]          0       
      Conv2D-3       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
      Conv2D-4       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-3      [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
      Conv2D-5       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
      Conv2D-6       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-4      [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Dropout-1       [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
non_bottleneck_1d-1  [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
      Conv2D-7       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
      Conv2D-8       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-5      [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
      Conv2D-9       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-10       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-6      [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Dropout-2       [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
non_bottleneck_1d-2  [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
     Conv2D-11       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-12       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-7      [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Conv2D-13       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-14       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-8      [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Dropout-3       [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
non_bottleneck_1d-3  [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
     Conv2D-15       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-16       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-9      [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Conv2D-17       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-18       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-10     [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Dropout-4       [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
non_bottleneck_1d-4  [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
     Conv2D-19       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-20       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-11     [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Conv2D-21       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-22       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-12     [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Dropout-5       [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
non_bottleneck_1d-5  [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
     Conv2D-23       [[1, 64, 144, 410]]    [1, 64, 72, 205]       36,928     
    MaxPool2D-3      [[1, 64, 144, 410]]    [1, 64, 72, 205]          0       
    BatchNorm-13     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
 DownsamplerBlock-3  [[1, 64, 144, 410]]   [1, 128, 72, 205]          0       
     Conv2D-24       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-25       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-14     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Conv2D-26       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-27       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-15     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Dropout-6       [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
non_bottleneck_1d-6  [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
     Conv2D-28       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-29       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-16     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Conv2D-30       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-31       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-17     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Dropout-7       [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
non_bottleneck_1d-7  [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
     Conv2D-32       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-33       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-18     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Conv2D-34       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-35       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-19     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Dropout-8       [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
non_bottleneck_1d-8  [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
     Conv2D-36       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-37       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-20     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Conv2D-38       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-39       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-21     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Dropout-9       [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
non_bottleneck_1d-9  [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
     Conv2D-40       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-41       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-22     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Conv2D-42       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-43       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-23     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Dropout-10      [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
non_bottleneck_1d-10 [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
     Conv2D-44       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-45       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-24     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Conv2D-46       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-47       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-25     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Dropout-11      [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
non_bottleneck_1d-11 [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
     Conv2D-48       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-49       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-26     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Conv2D-50       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-51       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-27     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Dropout-12      [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
non_bottleneck_1d-12 [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
     Conv2D-52       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-53       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-28     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Conv2D-54       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
     Conv2D-55       [[1, 128, 72, 205]]   [1, 128, 72, 205]       49,280     
    BatchNorm-29     [[1, 128, 72, 205]]   [1, 128, 72, 205]         512      
     Dropout-13      [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
non_bottleneck_1d-13 [[1, 128, 72, 205]]   [1, 128, 72, 205]          0       
     Encoder-1       [[1, 3, 576, 1640]]   [1, 128, 72, 205]          0       
 Conv2DTranspose-1   [[1, 128, 72, 205]]   [1, 64, 144, 410]       73,792     
    BatchNorm-30     [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
  UpsamplerBlock-1   [[1, 128, 72, 205]]   [1, 64, 144, 410]          0       
     Conv2D-57       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-58       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-31     [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Conv2D-59       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-60       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-32     [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
non_bottleneck_1d-14 [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
     Conv2D-61       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-62       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-33     [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
     Conv2D-63       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
     Conv2D-64       [[1, 64, 144, 410]]   [1, 64, 144, 410]       12,352     
    BatchNorm-34     [[1, 64, 144, 410]]   [1, 64, 144, 410]         256      
non_bottleneck_1d-15 [[1, 64, 144, 410]]   [1, 64, 144, 410]          0       
 Conv2DTranspose-2   [[1, 64, 144, 410]]   [1, 16, 288, 820]        9,232     
    BatchNorm-35     [[1, 16, 288, 820]]   [1, 16, 288, 820]         64       
  UpsamplerBlock-2   [[1, 64, 144, 410]]   [1, 16, 288, 820]          0       
     Conv2D-65       [[1, 16, 288, 820]]   [1, 16, 288, 820]         784      
     Conv2D-66       [[1, 16, 288, 820]]   [1, 16, 288, 820]         784      
    BatchNorm-36     [[1, 16, 288, 820]]   [1, 16, 288, 820]         64       
     Conv2D-67       [[1, 16, 288, 820]]   [1, 16, 288, 820]         784      
     Conv2D-68       [[1, 16, 288, 820]]   [1, 16, 288, 820]         784      
    BatchNorm-37     [[1, 16, 288, 820]]   [1, 16, 288, 820]         64       
non_bottleneck_1d-16 [[1, 16, 288, 820]]   [1, 16, 288, 820]          0       
     Conv2D-69       [[1, 16, 288, 820]]   [1, 16, 288, 820]         784      
     Conv2D-70       [[1, 16, 288, 820]]   [1, 16, 288, 820]         784      
    BatchNorm-38     [[1, 16, 288, 820]]   [1, 16, 288, 820]         64       
     Conv2D-71       [[1, 16, 288, 820]]   [1, 16, 288, 820]         784      
     Conv2D-72       [[1, 16, 288, 820]]   [1, 16, 288, 820]         784      
    BatchNorm-39     [[1, 16, 288, 820]]   [1, 16, 288, 820]         64       
non_bottleneck_1d-17 [[1, 16, 288, 820]]   [1, 16, 288, 820]          0       
 Conv2DTranspose-3   [[1, 16, 288, 820]]   [1, 2, 576, 1640]         130      
================================================================================
Total params: 2,069,678
Trainable params: 2,056,494
Non-trainable params: 13,184
--------------------------------------------------------------------------------
Input size (MB): 10.81
Forward/backward pass size (MB): 3300.82
Params size (MB): 7.90
Estimated Total Size (MB): 3319.53
--------------------------------------------------------------------------------
       
{'total_params': 2069678, 'trainable_params': 2056494}
               
In [7]
# Step 7: instantiate the train/val/test datasets and data loaders
train_dataset = MyDateset(mode='train')  # training set
val_dataset = MyDateset(mode='val')      # validation set
test_dataset = MyDateset(mode='test')    # test set

train_dataloader = paddle.io.DataLoader(
    train_dataset,
    batch_size=8,
    shuffle=True,
    drop_last=False)

val_dataloader = paddle.io.DataLoader(
    val_dataset,
    batch_size=1,
    shuffle=True,
    drop_last=False)

test_dataloader = paddle.io.DataLoader(
    test_dataset,
    batch_size=1,
    shuffle=True,
    drop_last=False)
   
In [8]
# Configure the model, loss function, and optimizer
model = ERFNet(num_classes=2)
model.train()
loss_fn = paddle.nn.CrossEntropyLoss(axis=1)
max_epoch = 1  # set to 1 here to keep the demo fast; to reproduce the project's results, use 50
scheduler = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=0.001, T_max=max_epoch)
opt = paddle.optimizer.Adam(learning_rate=scheduler, parameters=model.parameters())

# Uncomment to resume from a previously saved checkpoint:
# model_state_dict = paddle.load("save_model/epoch_3.pdparams")
# opt_state_dict = paddle.load("save_model/epoch_3.pdopt")
# model.set_state_dict(model_state_dict)
# opt.set_state_dict(opt_state_dict)
   
In [9]
# Train the model and log progress
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
f_log = open('log.txt', 'a')
for epoch in range(0, max_epoch):
    for step, data in enumerate(train_dataloader):
        img, label = data
        pre = model(img)
        loss = loss_fn(pre, label)
        predicts = paddle.argmax(pre, axis=1)
        # compute mIoU
        miou, wrong, correct = paddle.fluid.layers.mean_iou(predicts, label, 2)
        loss.backward()
        opt.step()
        opt.clear_gradients()
        if step % 100 == 0 and step != 0:
            temp = "epoch: {}, step : {}, loss is: {}, miou is: {}".format(epoch, step, loss.numpy(), miou.numpy())
            print(temp)
            f_log.write(temp + '\n')
    paddle.save(model.state_dict(), "save_model/epoch_{}.pdparams".format(epoch))
    paddle.save(opt.state_dict(), "save_model/epoch_{}.pdopt".format(epoch))
f_log.close()
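For reference, the mIoU logged above is, per class, the intersection of the predicted and ground-truth pixel masks divided by their union, averaged over classes. A minimal numpy equivalent for the 2-class case (an illustrative sketch with made-up toy masks, not the fluid implementation):

```python
import numpy as np

def mean_iou(pred, label, num_classes=2):
    # Per-class IoU = |pred == c AND label == c| / |pred == c OR label == c|
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 prediction and label masks (0 = background, 1 = lane).
pred = np.array([[0, 0, 1], [1, 1, 0]])
label = np.array([[0, 0, 1], [1, 0, 0]])
print(round(mean_iou(pred, label), 4))  # 0.7083
```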
       
epoch: 0, step : 100, loss is: [0.13927686], miou is: [0.4847293]
epoch: 0, step : 200, loss is: [0.13330099], miou is: [0.48230854]
epoch: 0, step : 300, loss is: [0.10703911], miou is: [0.4881684]
epoch: 0, step : 400, loss is: [0.08439594], miou is: [0.49422544]
epoch: 0, step : 500, loss is: [0.07835685], miou is: [0.49277282]
epoch: 0, step : 600, loss is: [0.13210864], miou is: [0.48121905]
epoch: 0, step : 700, loss is: [0.09453978], miou is: [0.4880681]
epoch: 0, step : 800, loss is: [0.09746557], miou is: [0.48614118]
epoch: 0, step : 900, loss is: [0.12714268], miou is: [0.4805257]
epoch: 0, step : 1000, loss is: [0.10404535], miou is: [0.48679313]
epoch: 0, step : 1100, loss is: [0.12627521], miou is: [0.4809]
epoch: 0, step : 1200, loss is: [0.11702421], miou is: [0.48320922]
epoch: 0, step : 1300, loss is: [0.0946762], miou is: [0.48671177]
epoch: 0, step : 1400, loss is: [0.11747885], miou is: [0.48124124674]
epoch: 0, step : 1500, loss is: [0.10730279], miou is: [0.48404324]
epoch: 0, step : 1600, loss is: [0.13672356], miou is: [0.4818533]
epoch: 0, step : 1700, loss is: [0.12087876], miou is: [0.48356727]
epoch: 0, step : 1800, loss is: [0.10689122], miou is: [0.48393604]
epoch: 0, step : 1900, loss is: [0.09504438], miou is: [0.4867893]
epoch: 0, step : 2000, loss is: [0.11099294], miou is: [0.48421988]
epoch: 0, step : 2100, loss is: [0.10703284], miou is: [0.48620635]
epoch: 0, step : 2200, loss is: [0.10188617], miou is: [0.48697066]
epoch: 0, step : 2300, loss is: [0.07738338], miou is: [0.4924985]
epoch: 0, step : 2400, loss is: [0.06830326], miou is: [0.5553771]
epoch: 0, step : 2500, loss is: [0.11509577], miou is: [0.56002903]
epoch: 0, step : 2600, loss is: [0.09727462], miou is: [0.58344084]
epoch: 0, step : 2700, loss is: [0.09854273], miou is: [0.563583]
epoch: 0, step : 2800, loss is: [0.09234307], miou is: [0.58188295]
epoch: 0, step : 2900, loss is: [0.0960774], miou is: [0.56114256]
epoch: 0, step : 3000, loss is: [0.08726992], miou is: [0.55743444]
epoch: 0, step : 3100, loss is: [0.10137159], miou is: [0.5864695]
epoch: 0, step : 3200, loss is: [0.10453519], miou is: [0.5808536]
epoch: 0, step : 3300, loss is: [0.11421765], miou is: [0.5466621]
epoch: 0, step : 3400, loss is: [0.11816304], miou is: [0.54667276]
epoch: 0, step : 3500, loss is: [0.08592777], miou is: [0.5695046]
epoch: 0, step : 3600, loss is: [0.10629157], miou is: [0.5724659]
epoch: 0, step : 3700, loss is: [0.08172587], miou is: [0.525596]
epoch: 0, step : 3800, loss is: [0.06315774], miou is: [0.57792294]
epoch: 0, step : 3900, loss is: [0.10147866], miou is: [0.5600542]
epoch: 0, step : 4000, loss is: [0.08548972], miou is: [0.6008725]
epoch: 0, step : 4100, loss is: [0.07798672], miou is: [0.62052643]
epoch: 0, step : 4200, loss is: [0.08380488], miou is: [0.60859656]
epoch: 0, step : 4300, loss is: [0.10450178], miou is: [0.591514]
epoch: 0, step : 4400, loss is: [0.08817854], miou is: [0.59015864]
epoch: 0, step : 4500, loss is: [0.10661422], miou is: [0.5781183]
epoch: 0, step : 4600, loss is: [0.09698336], miou is: [0.5511778]
epoch: 0, step : 4700, loss is: [0.09957878], miou is: [0.61045104]
epoch: 0, step : 4800, loss is: [0.10301865], miou is: [0.5733046]
epoch: 0, step : 4900, loss is: [0.08925698], miou is: [0.5949598]
epoch: 0, step : 5000, loss is: [0.08743063], miou is: [0.59802926]
epoch: 0, step : 5100, loss is: [0.06179001], miou is: [0.6367101]
epoch: 0, step : 5200, loss is: [0.10025749], miou is: [0.58668095]
epoch: 0, step : 5300, loss is: [0.09665603], miou is: [0.57429576]
epoch: 0, step : 5400, loss is: [0.12412474185], miou is: [0.54220945]
epoch: 0, step : 5500, loss is: [0.06253532], miou is: [0.6440188]
epoch: 0, step : 5600, loss is: [0.09304313], miou is: [0.5715176]
epoch: 0, step : 5700, loss is: [0.09525367], miou is: [0.5625985]
epoch: 0, step : 5800, loss is: [0.08387703], miou is: [0.5741054]
epoch: 0, step : 5900, loss is: [0.08723761], miou is: [0.59803224]
epoch: 0, step : 6000, loss is: [0.09758568], miou is: [0.6223379]
epoch: 0, step : 6100, loss is: [0.11697459], miou is: [0.57924354]
epoch: 0, step : 6200, loss is: [0.100724], miou is: [0.5860153]
epoch: 0, step : 6300, loss is: [0.08684465], miou is: [0.61564314]
epoch: 0, step : 6400, loss is: [0.09025575], miou is: [0.62548506]
epoch: 0, step : 6500, loss is: [0.05869177], miou is: [0.6868839]
epoch: 0, step : 6600, loss is: [0.08888834], miou is: [0.6237665]
epoch: 0, step : 6700, loss is: [0.06414994], miou is: [0.59068704]
epoch: 0, step : 6800, loss is: [0.09091921], miou is: [0.5837776]
epoch: 0, step : 6900, loss is: [0.0872962], miou is: [0.5957216]
epoch: 0, step : 7000, loss is: [0.10638998], miou is: [0.56399274]
epoch: 0, step : 7100, loss is: [0.09597596], miou is: [0.5542962]
epoch: 0, step : 7200, loss is: [0.0508446], miou is: [0.57402736]
epoch: 0, step : 7300, loss is: [0.08191186], miou is: [0.6318141]
epoch: 0, step : 7400, loss is: [0.08166245], miou is: [0.59082395]
epoch: 0, step : 7500, loss is: [0.09494389], miou is: [0.61808616]
epoch: 0, step : 7600, loss is: [0.08884145], miou is: [0.5611312]
epoch: 0, step : 7700, loss is: [0.10616488], miou is: [0.5547414]
epoch: 0, step : 7800, loss is: [0.09759728], miou is: [0.62624496]
epoch: 0, step : 7900, loss is: [0.08903372], miou is: [0.6205214]
epoch: 0, step : 8000, loss is: [0.10339966], miou is: [0.5819689]
epoch: 0, step : 8100, loss is: [0.10582763], miou is: [0.55487573]
epoch: 0, step : 8200, loss is: [0.08446056], miou is: [0.62681633]
epoch: 0, step : 8300, loss is: [0.0832355], miou is: [0.6156939]
epoch: 0, step : 8400, loss is: [0.09023587], miou is: [0.6016093]
epoch: 0, step : 8500, loss is: [0.08267266], miou is: [0.59566665]
epoch: 0, step : 8600, loss is: [0.10186594], miou is: [0.56223583]
epoch: 0, step : 8700, loss is: [0.0877626], miou is: [0.63271546]
epoch: 0, step : 8800, loss is: [0.11042956], miou is: [0.580356]
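The training loop appends lines like those above to `log.txt`; to plot loss or mIoU curves afterwards, the log can be parsed back into numbers. A sketch that assumes the exact format of the `temp` string used in the loop:

```python
import re

# Pattern mirroring "epoch: E, step : S, loss is: [L], miou is: [M]"
pattern = re.compile(
    r"epoch: (\d+), step : (\d+), loss is: \[([\d.]+)\], miou is: \[([\d.]+)\]")

def parse_log(lines):
    """Extract (epoch, step, loss, miou) tuples from training-log lines."""
    records = []
    for line in lines:
        m = pattern.search(line)
        if m:
            records.append((int(m.group(1)), int(m.group(2)),
                            float(m.group(3)), float(m.group(4))))
    return records

sample = ["epoch: 0, step : 100, loss is: [0.13927686], miou is: [0.4847293]"]
print(parse_log(sample))  # -> [(0, 100, 0.13927686, 0.4847293)]
```

Feeding the parsed loss/mIoU series to `matplotlib` then gives a quick view of how training progressed.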
       
In [10]
# After training, export the model with paddle.jit so that loading it later
# no longer requires re-building the network structure in code
from paddle.static import InputSpec

path = "./export_model/MyERFnet"
paddle.jit.save(
    layer=model,
    path=path,
    input_spec=[InputSpec(shape=[1, 3, 576, 1640])])
       
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  return (isinstance(seq, collections.Sequence) and
       

六、Model Inference

In [13]
# Load the exported model
path = "./export_model/MyERFnet_b"
loaded_layer = paddle.jit.load(path)
loaded_layer.eval()
   
In [14]
for i in range(0, 5):
    # OpenCV loads images as BGR; swap channels to RGB for the model and display
    img = cv.imread("/home/aistudio/CULane/JPEGImages/{}.jpg".format(i))
    b, g, r = cv.split(img)
    img = cv.merge([r, g, b])
    orign = cv.resize(img, (1640, 576))

    transforms = T.Compose([
        T.Resize((576, 1640)),
        T.Transpose(),
        T.Normalize(mean=127.5, std=127.5)
    ])
    img = transforms(img)
    img = paddle.to_tensor(img)
    img = paddle.unsqueeze(img, axis=0)

    result = loaded_layer(img)
    result = paddle.squeeze(result, axis=0)
    result = paddle.transpose(result, perm=[1, 2, 0])
    result = np.array(result)
    # Pick the highest-scoring class per pixel to form the segmentation map
    result = np.argmax(result, -1)

    plt.figure()
    plt.subplot(1, 2, 1)
    plt.title('img')
    plt.imshow(orign.astype('uint8'))
    plt.axis('off')

    plt.subplot(1, 2, 2)
    plt.title('predict')
    plt.imshow(result.astype('uint8'), cmap='gray')
    plt.axis('off')

    plt.show()
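In the loop above, `np.argmax(result, -1)` collapses the network's per-class score map into a per-pixel class map. A toy NumPy example of that reduction, with scores invented purely for illustration (2 classes over a 2×2 image):

```python
import numpy as np

# Fake network output: per-class scores laid out [C, H, W]
scores = np.array([[[0.9, 0.2],
                    [0.8, 0.1]],   # class 0 (background) scores
                   [[0.1, 0.8],
                    [0.2, 0.9]]])  # class 1 (lane) scores

# Move channels last ([H, W, C]) as the inference code does, then take
# the highest-scoring class at each pixel
class_map = np.argmax(scores.transpose(1, 2, 0), axis=-1)
print(class_map)  # -> [[0 1]
                  #     [0 1]]
```

The resulting integer map is what `plt.imshow(..., cmap='gray')` renders: background pixels as 0 and lane pixels as their class index.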
       
(Five matplotlib figures are shown here, each pairing an input frame with its predicted lane-line mask.)

七、Local Deployment Demo

   
