Three important properties of variables and functions in C++

  • Storage duration: governs when a variable is created and destroyed
  • Linkage: governs whether identifiers refer to the same memory
  • Scope: governs where a variable or function is visible

The identifiers discussed in this article include both variables and functions.

Storage specifiers

Storage specifiers control when a variable is allocated and freed. They are:

  • automatic
  • thread_local
  • static
  • register
  • mutable
  • extern

Notes:
  • automatic: the most common local variables, declared neither static nor thread_local; they live on the stack and are allocated and destroyed automatically as their block is entered and exited
  • static: static variables are created at program start and destroyed at program end, but initialization happens the first time the initializing code executes
  • thread: allocated when the thread starts and destroyed when it ends
  • dynamic: the familiar heap variables, which require explicit new and delete

In C++11, auto no longer declares a storage duration but performs type deduction; the automatic storage duration itself still exists (local variables).
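As a compact illustration, here is a sketch showing the four durations side by side (the names are made up):

// Sketch: the four storage durations (illustrative names).
int g;                         // static storage duration: whole-program lifetime
thread_local int t;            // thread storage duration: one instance per thread

void demo() {
    int local = 0;             // automatic: allocated on block entry, destroyed on exit
    int* heap = new int(42);   // dynamic: lives from new until the matching delete
    delete heap;
}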

When initialization happens

  • automatic: must be initialized manually; in other words, a local variable must be initialized, or its value is indeterminate
  • static: initialized at execution time, exactly once; in special cases it is initialized before execution (see the sketch after this list)
  • thread: since thread_local variables inherently carry static properties, treat them the same as static
  • dynamic: initialized at new
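A minimal sketch of the static case (counter is a made-up name):

int counter() {
    static int n = 0;  // runs once, the first time control passes over this line
    return ++n;        // n keeps its value between calls: 1, 2, 3, ...
}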

Linkage

An identifier (variable or function) stands for a value in a block of memory, or for a function body; linkage determines whether other identifiers with the same name refer to that same memory. C/C++ has three kinds of linkage: no linkage, internal linkage, and external linkage.

  • no linkage: local variables have no linkage, so two `a`s are independent; an inner a shadows an outer a without otherwise interacting with it. Here linkage resembles scope.
  • internal linkage: accessible only within the file (file scope); in other words, not exposed to the linker. Declared with the static specifier, so two files may each declare an internal-linkage identifier with the same name and type, and they refer to different memory (see the sketch after this list).
  • external linkage: accessible everywhere in the program, including from other files (global scope), so it is truly "global" (in both scope and linkage); all such identifiers refer to a single piece of memory.
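A two-file sketch of the difference (file names are illustrative):

// file1.cpp
static int s = 1;  // internal linkage: invisible to the linker
int g = 1;         // external linkage: one definition for the whole program

// file2.cpp
static int s = 2;  // a distinct s in distinct memory; no conflict with file1.cpp
extern int g;      // declares the same g as defined in file1.cpp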

Specifiers

  • Global const and global constexpr variables have internal linkage by default; adding static changes nothing
  • Global non-const variables have external linkage by default, so adding extern changes nothing. Declaring the variable with extern in another file gives you access to the variable backed by the same memory
  • Functions have external linkage by default, so adding extern changes nothing. Declaring the function with extern (optional) in another file gives you access to the same function
  • Adding extern to a global const or constexpr variable gives it external linkage

Notice that static and extern each denote both a storage duration and a linkage. static is relatively simple; extern is trickier, as the following cases show:

int g_x = 1; // define an initialized global variable (extern optional)
int g_x; // define an uninitialized global variable (extern not allowed); initializer optional
extern int g_x; // forward declaration of a global variable; may not be initialized

extern const int g_y { 1 }; // define a global constant; const must be initialized
extern const int g_y; // forward declaration of a global constant; may not be initialized

So when defining an uninitialized global variable, do not add extern, or it becomes a forward declaration instead.

The constexpr special case

Although adding extern gives a constexpr variable external linkage, it cannot be forward-declared in another file. constexpr values are substituted at compile time, and the compiler's view is limited to the current file, so while compiling another file the compiler cannot know the constexpr value or the contents of its memory; it can therefore only be defined, not declared, elsewhere.

file scope and global scope

For a local variable, scope, (no) linkage, and duration coincide: from { to }. In principle, global scope subsumes file scope, and linkage is what decides whether an identifier can be used from other files.

local class

A local class is not allowed to have static data members.
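A small sketch of the restriction:

void f() {
    struct Local {
        // static int n;  // would not compile: local classes cannot have static data members
        int m;            // non-static data members and member functions are fine
        void g() {}
    };
    Local l{1};
    l.g();
}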

References

https://en.cppreference.com/w/cpp/language/storage_duration

Linux tuning

OS vendors don't like to discuss system tuning: for one thing there is no end to it, for another it is complex; and arguably, doesn't tuning imply the defaults aren't good enough?

SUSE's support policy even states:

Explanations of internal mechanisms and system tuning are outside the scope of our technical support.

Here are some related notes.

buffer/cache: roles and differences

A buffer holds data about to be written out to disk (a block device), while a cache holds data read from disk. Both exist to improve I/O performance.

  • buffer: buffering smooths the hand-off between fast and slow devices; the fast side passes data through the buffer to the slow side bit by bit. For example, writes from memory to disk are not performed directly: data is buffered until enough accumulates, then flushed to disk. A buffer is something that has yet to be "written" to disk.
  • cache: caching enables data reuse; data that is needed often from a slow device is cached, and the cached copy can then be delivered at high speed to the fast side. For example, data read from disk is kept in an in-memory cache, so later accesses to the same resource are much faster. A cache is something that has been "read" from the disk and stored for later use.

In short, buffers and caches both mediate between memory and disk: the former on the write-to-disk path, the latter on the read-into-memory path.

Reclaiming the cache

Reclaim it via drop_caches:

# sync; sync; sync
# echo 3 > /proc/sys/vm/drop_caches

free gained about 300 MB.

About swap

Swap is a swap partition on disk. The kernel moves memory pages out of RAM into the swap partition (swap out).

Swap is governed by the kernel parameter vm.swappiness, whose default is 60; cat /proc/sys/vm/swappiness shows the current value.
The parameter ranges from 0 to 100 and controls how willing the kernel is to use swap.

Setting it to 0 tells the kernel to avoid moving processes out of physical memory whenever possible;
setting it to 100 tells the kernel to swap pages out aggressively. Note: 0 does not disable the swap partition; it only tells the kernel to use swap as little as it can, while vm.swappiness=100 means use the swap partition as much as possible.

Of course, more complex algorithms are involved beyond swappiness. If you assume swap is only used after all physical memory is exhausted, that is not what happens: I have seen a machine with only 10 MB of physical memory left that still used no swap, and another server with 15 GB free that had used a little swap. Light swap usage does not hurt performance; serious problems appear only when memory becomes a bottleneck, or when a memory leak or misbehaving process causes frequent, heavy use of the swap partition.

Q: when is swap used?

As said above, this is hard to pin down. In theory, when physical memory runs short and something must be read in, pages of long-idle programs get swapped out.
In practice, though, the kernel is often seen using swap even with plenty of free memory.

Q: what gets swapped out?

See the test below.

Reclaiming swap

After swapoff, run sudo sysctl vm.swappiness=0 to temporarily stop the kernel from swapping out.

Load the swapped data back into memory and restart swap:

# swapoff -a
# swapon -a

This empties the swap partition. My own test, on kernel 5.10.0-8-amd64:

               total        used        free      shared  buff/cache   available
Mem:        12162380     4911564     5605744      459364     1645072     6466572
Swap:        1000444      763040      237404

After restarting swap:

               total        used        free      shared  buff/cache   available
Mem:        12162380     5605800     4843176      524984     1713404     5707112
Swap:        1000444           0     1000444

So after swap was disabled, most of swap's used moved into Mem's used, and a small part into Mem's shared.

Some effective tuning tools

  • perf + flame graphs: show where time goes, down to individual function calls; for your own program this tells you which functions need optimizing
  • vmstat: shows disk I/O. With vmstat -t 3, if the number in the b column stays large, disk I/O is badly blocked; the disk may be failing, or the program may be poorly designed

There are also top, iperf, and so on.

ddns

code

    """
    更新
    """
    parser = ArgumentParser(description=__description__,
                            epilog=__doc__, formatter_class=RawTextHelpFormatter)
    parser.add_argument('-v', '--version',
                        action='version', version=__version__)
    parser.add_argument('-c', '--config',
                        default="config.json", help="run with config file [path to the config file]")
    config_file = parser.parse_args().config
    get_config(path=config_file)
    # Dynamically import the dns module according to the configuration
    dns_provider = str(get_config('dns', 'dnspod').lower())
    dns = getattr(__import__('dns', fromlist=[dns_provider]), dns_provider)
    dns.Config.ID = get_config('id')
    dns.Config.TOKEN = get_config('token')
    dns.Config.TTL = get_config('ttl')
    if get_config('debug'):
        ip.DEBUG = get_config('debug')
        basicConfig(
            level=DEBUG,
            format='%(asctime)s <%(module)s.%(funcName)s> %(lineno)d@%(pathname)s \n[%(levelname)s] %(message)s')
        print("DDNS[", __version__, "] run:", os_name, sys.platform)
        print("Configuration was loaded from <==", path.abspath(config_file))
        print("=" * 25, ctime(), "=" * 25, sep=' ')

    proxy = get_config('proxy') or 'DIRECT'
    proxy_list = proxy.strip('; ').split(';')

    cache = get_config('cache', True) and Cache(CACHE_FILE)
    if cache is False:
        info("Cache is disabled!")
    elif get_config.time >= cache.time:
        warning("Cache file is out of dated.")
        cache.clear()
    elif not cache:
        debug("Cache is empty.")
    update_ip('4', cache, dns, proxy_list)
    update_ip('6', cache, dns, proxy_list)


if __name__ == '__main__':
    main()
An example config.json:

{
  "$schema": "https://ddns.newfuture.cc/schema/v2.8.json",
  "id": "",
  "token": "",
  "dns": "alidns",
  "ipv4": ["", ""],
  "index4": "public",
  "ttl": 600,
  "proxy": "DIRECT",
  "debug": false
}

grpc callback api

C++ callback-based asynchronous API

  • Author(s): vjpai, sheenaqotj, yang-g, zhouyihaiding
  • Approver: markdroth
  • Status: Proposed
  • Implemented in: https://github.com/grpc/grpc/projects/12
  • Last updated: March 22, 2021
  • Discussion at https://groups.google.com/g/grpc-io/c/rXLdWWiosWg

Abstract

Provide an asynchronous gRPC API for C++ in which the completion of RPC actions in the library will result in callbacks to user code.

Background

Since its initial release, gRPC has provided two C++ APIs:

  • Synchronous API
      • All RPC actions (such as unary calls, streaming reads, streaming writes, etc.) block for completion
      • Library provides a thread-pool so that each incoming server RPC executes its method handler in its own thread
  • Completion-queue-based (aka CQ-based) asynchronous API
      • Application associates each RPC action that it initiates with a tag
      • The library performs each RPC action
      • The library posts the tag of a completed action onto a completion queue
      • The application must poll the completion queue to determine which asynchronously-initiated actions have completed
      • The application must provide and manage its own threads
      • Server RPCs don't have any library-invoked method handler; instead the application is responsible for executing the actions for an RPC once it is notified of an incoming RPC via the completion queue

The goal of the synchronous version is to be easy to program. However, this comes at the cost of high thread-switching overhead and high thread storage for systems with many concurrent RPCs. On the other hand, the asynchronous API allows the application full control over its threading and thus can scale further. The biggest problem with the asynchronous API is that it is just difficult to use. Server RPCs must be explicitly requested, RPC polling must be explicitly controlled by the application, lifetime management is complicated, etc. These have proved sufficiently difficult that the full features of the asynchronous API are basically never used by applications. Even if one can use the async API correctly, it also presents challenges in deciding how many completion queues to use and how many threads to use for polling them, as one can either optimize for reducing thread hops, avoiding stranding, reducing CQ contention, or improving locality. These goals are often in conflict and require substantial tuning.

  • The C++ callback API has an implementation that is built on top of a new callback completion queue in core. There is also another implementation, discussed below.
  • The API structure has substantial similarities to the gRPC-Node and gRPC-Java APIs.

Proposal

The callback API is designed to have the performance and thread scalability of an asynchronous API without the burdensome programming model of the completion-queue-based model. In particular, the following are fundamental guiding principles of the API:

  • Library directly calls user-specified code at the completion of RPC actions. This user code is run from the library's own threads, so it is very important that it must not wait for completion of any blocking operations (e.g., condition variable waits, invoking synchronous RPCs, blocking file I/O).
  • No explicit polling required for notification of completion of RPC actions.
      • In practice, these requirements mean that there must be a library-controlled poller for monitoring such actions. This is discussed in more detail in the Implementation section below.
  • As in the synchronous API, server RPCs have an application-defined method handler function as part of their service definition. The library invokes this method handler when a new server RPC starts.
      • Like the synchronous API and unlike the completion-queue-based asynchronous API, there is no need for the application to "request" new server RPCs. Server RPC context structures will be allocated and have their resources allocated as and when RPCs arrive at the server.

Reactor model

The most general form of the callback API is built around a reactor model. Each type of RPC has a reactor base class provided by the library. These types are:

  • ClientUnaryReactor and ServerUnaryReactor for unary RPCs
  • ClientBidiReactor and ServerBidiReactor for bidi-streaming RPCs
  • ClientReadReactor and ServerWriteReactor for server-streaming RPCs
  • ClientWriteReactor and ServerReadReactor for client-streaming RPCs

Client RPC invocations from a stub provide a reactor pointer as one of their arguments, and the method handler of a server RPC must return a reactor pointer.

These base classes provide three types of methods:

  1. Operation-initiation methods: start an asynchronous activity in the RPC. These are methods provided by the class and are not virtual. These are invoked by the application logic. All of these have a void return type. The ReadMessageType below is the request type for a server RPC and the response type for a client RPC; the WriteMessageType is the response type for a server RPC or the request type for a client RPC.
      • void StartCall(): (Client only) Initiates the operations of a call from the client, including sending any client-side initial metadata associated with the RPC. Must be called exactly once. No reads or writes will actually be started until this is called (i.e., any previous calls to StartRead, StartWrite, or StartWritesDone will be queued until StartCall is invoked). This operation is not needed at the server side since streaming operations at the server are released from backlog automatically by the library as soon as the application returns a reactor from the method handler, and because there is a separate method just for sending initial metadata.
      • void StartSendInitialMetadata(): (Server only) Sends server-side initial metadata. To be used in cases where initial metadata should be sent without sending a message. Optional; if not called, initial metadata will be sent when StartWrite or Finish is called. May not be invoked more than once or after StartWrite or Finish has been called. This does not exist at the client because sending initial metadata is part of StartCall.
      • void StartRead(ReadMessageType*): Starts a read of a message into the object pointed to by the argument. OnReadDone will be invoked when the read is complete. Only one read may be outstanding at any given time for an RPC (though a read and a write can be concurrent with each other). If this operation is invoked by a client before calling StartCall or by a server before returning from the method handler, it will be queued until one of those events happens and will not actually trigger any activity or reactions until it is thereby released from the queue.
      • void StartWrite(const WriteMessageType*): Starts a write of the object pointed to by the argument. OnWriteDone will be invoked when the write is complete. Only one write may be outstanding at any given time for an RPC (though a read and a write can be concurrent with each other). As with StartRead, if this operation is invoked by a client before calling StartCall or by a server before returning from the method handler, it will be queued until one of those events happens and will not actually trigger any activity or reactions until it is thereby released from the queue.
      • void StartWritesDone(): (Client only) For client RPCs to indicate that there are no more writes coming in this stream. OnWritesDoneDone will be invoked when this operation is complete. This causes future read operations on the server RPC to indicate that there is no more data available. Highly recommended but technically optional; may not be called more than once per call. As with StartRead and StartWrite, if this operation is invoked by a client before calling StartCall or by a server before returning from the method handler, it will be queued until one of those events happens and will not actually trigger any activity or reactions until it is thereby released from the queue.
      • void Finish(Status): (Server only) Sends completion status to the client, asynchronously. Must be called exactly once for all server RPCs, even for those that have already been cancelled. No further operation-initiation methods may be invoked after Finish.
  2. Operation-completion reaction methods: notification of completion of asynchronous RPC activity. These are all virtual methods that default to an empty function (i.e., {}) but may be overridden by the application's reactor definition. These are invoked by the library. All of these have a void return type. Most take a bool ok argument to indicate whether the operation completed "normally," as explained below.
      • void OnReadInitialMetadataDone(bool ok): (Client only) Invoked by the library to notify that the server has sent an initial metadata response to a client RPC. If ok is true, then the RPC received initial metadata normally. If it is false, there is no initial metadata either because the call has failed or because the call received a trailers-only response (which means that there was no actual message and that any information normally sent in initial metadata has been dispatched instead to trailing metadata, which is allowed in the gRPC HTTP/2 transport protocol). This reaction is automatically invoked by the library for RPCs of all varieties; it is uncommonly used as an application-defined reaction however.
      • void OnReadDone(bool ok): Invoked by the library in response to a StartRead operation. The ok argument indicates whether a message was read as expected. A false ok could mean a failed RPC (e.g., cancellation) or a case where no data is possible because the other side has already ended its writes (e.g., seen at the server-side after the client has called StartWritesDone).
      • void OnWriteDone(bool ok): Invoked by the library in response to a StartWrite operation. The ok argument indicates whether the write was successfully sent; a false value indicates an RPC failure.
      • void OnWritesDoneDone(bool ok): (Client only) Invoked by the library in response to a StartWritesDone operation. The bool ok argument indicates whether the writes-done operation was successfully completed; a false value indicates an RPC failure.
      • void OnCancel(): (Server only) Invoked by the library if an RPC is canceled before it has a chance to successfully send status to the client side. The reaction may be used for any cleanup associated with cancellation or to guide the behavior of other parts of the system (e.g., by setting a flag in the service logic associated with this RPC to stop further processing since the RPC won't be able to send outbound data anyway). Note that servers must call Finish even for RPCs that have already been canceled as this is required to cleanup all their library state and move them to a state that allows for calling OnDone.
      • void OnDone(const Status&) at the client, void OnDone() at the server: Invoked by the library when all outstanding and required RPC operations are completed for a given RPC. For the client-side, it additionally provides the status of the RPC (either as sent by the server with its Finish call or as provided by the library to indicate a failure), in which case the signature is void OnDone(const Status&). The server version has no argument, and thus has a signature of void OnDone(). Should be used for any application-level RPC-specific cleanup.
      • Thread safety: the above calls may take place concurrently, except that OnDone will always take place after all other reactions. No further RPC operations are permitted to be issued after OnDone is invoked.
      • IMPORTANT USAGE NOTE: code in any reaction must not block for an arbitrary amount of time since reactions are executed on a finite-sized, library-controlled threadpool. If any long-term blocking operations (like sleeps, file I/O, synchronous RPCs, or waiting on a condition variable) must be invoked as part of the application logic, then it is important to push that outside the reaction so that the reaction can complete in a timely fashion. One way of doing this is to push that code to a separate application-controlled thread.
  3. RPC completion-prevention methods. These are methods provided by the class and are not virtual. They are only present at the client-side because the completion of a server RPC is clearly requested when the application invokes Finish. These methods are invoked by the application logic. All of these have a void return type.
      • void AddHold(): (Client only) This prevents the RPC from being considered complete (ready for OnDone) until each AddHold on an RPC's reactor is matched to a corresponding RemoveHold. An application uses this operation before it performs any extra-reaction flows, which refers to streaming operations initiated from outside a reaction method. Note that an RPC cannot complete before StartCall, so holds are not needed for any extra-reaction flows that take place before StartCall. As long as there are any holds present on an RPC, though, it may not have OnDone called on it, even if it has already received server status and has no other operations outstanding. May be called 0 or more times on any client RPC.
      • void AddMultipleHolds(int holds): (Client only) Shorthand for holds invocations of AddHold.
      • void RemoveHold(): (Client only) Removes a hold reference on this client RPC. Must be called exactly as many times as AddHold was called on the RPC, and may not be called more times than AddHold has been called so far for any RPC. Once all holds have been removed, the server has provided status, and all outstanding or required operations have completed for an RPC, the library will invoke OnDone for that RPC.

Examples are provided in the PR to de-experimentalize the callback API.
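For orientation only, here is a minimal server-side bidi echo sketch against this API (using the familiar helloworld message types; this example is mine, not from the proposal):

class EchoReactor : public grpc::ServerBidiReactor<helloworld::HelloRequest,
                                                   helloworld::HelloReply> {
 public:
  EchoReactor() { StartRead(&req_); }               // queue the first read
  void OnReadDone(bool ok) override {
    if (!ok) { Finish(grpc::Status::OK); return; }  // client finished writing
    reply_.set_message(req_.name());
    StartWrite(&reply_);                            // echo it back
  }
  void OnWriteDone(bool ok) override {
    if (ok) StartRead(&req_);                       // wait for the next message
    else Finish(grpc::Status::CANCELLED);
  }
  void OnDone() override { delete this; }           // all activity complete
 private:
  helloworld::HelloRequest req_;
  helloworld::HelloReply reply_;
};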

Unary RPC shortcuts

As a shortcut, client-side unary RPCs may bypass the reactor model by directly providing a std::function for the library to call at completion rather than a reactor object pointer. This is passed as the final argument to the stub call, just as the reactor would be in the more general case. This is semantically equivalent to a reactor in which the OnDone function simply invokes the specified function (but can be implemented in a slightly faster way since such an RPC will definitely not wait separately for initial metadata from the server) and all other reactions are left empty. In practice, this is the common and recommended model for client-side unary RPCs, unless they have a specific need to wait for initial metadata before getting their full response message. As in the reactor model, the function provided as a callback may not include operations that block for an arbitrary amount of time.

Server-side unary RPCs have the option of returning a library-provided default reactor when their method handler is invoked. This is provided by calling DefaultReactor on the CallbackServerContext. This default reactor provides a Finish method, but does not provide a user callback for OnCancel and OnDone. In practice, this is the common and recommended model for most server-side unary RPCs unless they specifically need to react to an OnCancel callback or do cleanup work after the RPC fully completes.

ServerContext extensions

ServerContext is now made a derived class of ServerContextBase. There is a new derived class of ServerContextBase called CallbackServerContext which provides a few additional functions:

  • ServerUnaryReactor* DefaultReactor() may be used by a method handler to return a default reactor from a unary RPC.
  • RpcAllocatorState* GetRpcAllocatorState: see advanced topics section

Additionally, the AsyncNotifyWhenDone function is not present in the CallbackServerContext.

All method handler functions for the callback API take a CallbackServerContext* as their first argument. ServerContext (used for the sync and CQ-based async APIs) and CallbackServerContext (used for the callback API) actually use the same underlying structure and thus their object pointers are meaningfully convertible to each other via a static_cast to ServerContextBase*. We recommend that any helper functions that need to work across API variants should use a ServerContextBase pointer or reference as their argument rather than a ServerContext or CallbackServerContext pointer or reference. For example, ClientContext::FromServerContext now uses a ServerContextBase* as its argument; this is not a breaking API change since the argument is now a parent class of the previous argument's class.

Advanced topics

Application-managed server memory allocation

Callback services must allocate an object for the CallbackServerContext and for the request and response objects of a unary call. Applications can supply a per-method custom memory allocator for gRPC server to use to allocate and deallocate the request and response messages, as well as a per-server custom memory allocator for context objects. These can be used for purposes like early or delayed release, freelist-based allocation, or arena-based allocation. For each unary RPC method, there is a generated method in the server called SetMessageAllocatorFor_*MethodName* . For each server, there is a method called SetContextAllocator. Each of these has numerous classes involved, and the best examples for how to use these features live in the gRPC tests directory.

Generic (non-code-generated) services

RegisterCallbackGenericService is a new method of ServerBuilder to allow for processing of generic (unparsed) RPCs. This is similar to the pre-existing RegisterAsyncGenericService but uses the callback API and reactors rather than the CQ-based async API. It is expected to be used primarily for generic gRPC proxies where the exact serialization format or list of supported methods is unknown.

Per-method specification

Just as with async services, callback services may also be specified on a method-by-method basis (using the syntax WithCallbackMethod_*MethodName*), with any unlisted methods being treated as sync RPCs. The shorthand CallbackService declares every method as being processed by the callback API. For example:

  • Foo::Service -- purely synchronous service
  • Foo::CallbackService -- purely callback service
  • Foo::WithCallbackMethod_Bar<Service> -- synchronous service except for callback method Bar
  • Foo::WithCallbackMethod_Bar<WithCallbackMethod_Baz<Service>> -- synchronous service except for callback methods Bar and Baz

Rationale

Besides the content described in the background section, the rationale also includes early and consistent user demand for this feature as well as the fact that many users were simply spinning up a callback model on top of gRPC's completion queue-based asynchronous model.

Implementation

There is more than one mechanism available for implementing the background polling required by the C++ callback API. One has been implemented on top of the C++ completion queue API. In this approach, the callback API uses a number of library-owned threads to call Next on an async CQ that is owned by the internal implementation. Currently, the thread count is automatically selected by the library with no user input and is set to half the system's core count, but no less than 2 and no more than 16. This selection is subject to change in the future based on our team's ongoing performance analysis and tuning efforts. Despite being built on the CQ-based async API, the developer using the callback API does not need to consider any of the CQ details (e.g., shutdown, polling, or even the existence of a CQ).

It is the gRPC team's intention that that implementation is only a temporary solution. A new structure called an EventEngine is being developed to provide the background threads needed for polling, and this system is also intended to provide a direct API for application use. This event engine would also allow the direct use of the core callback API that is currently only used by the Python async implementation. If this solution is adopted, there will be a new gRFC for it. This new implementation will not change the callback API at all but rather will only affect its performance. The C++ code for the callback API already has if branches in place to support the use of a poller that directly supplies the background threads, so the callback API will naturally layer on top of the EventEngine without further development effort.

Open issues (if applicable)

N/A. The gRPC C++ callback API has been used internally at Google for two years now, and the code and API have evolved substantially during that period.

rpc

RPC stands for remote procedure call; broadly speaking, HTTP and gRPC are both RPC. There is even a project, grpc-gateway, that exposes gRPC over HTTP.

grpc

gRPC is one implementation of RPC, open-sourced by Google; others include Thrift, sogorpc, and so on. gRPC uses the HTTP/2 protocol.

Differences between HTTP/1.1 and HTTP/2

  • HTTP/2 uses binary framing where 1.1 uses text, which is more efficient
  • HTTP/2 multiplexes requests over one shared TCP connection, while 1.1 creates a TCP connection per request
  • HTTP/2 clients use streams, allowing multiple requests
  • HTTP/2 has trailers, i.e. trailing metadata, which can carry things like a checksum of the body (though that could also simply go in the body) ...

HTTP/1.1 also already supports server-to-client streaming, using 'Transfer-Encoding: chunked' in place of 'Content-Length'; see the RFC:

 A sender MUST NOT send a Content-Length header field in any message
   that contains a Transfer-Encoding header field.

Understanding .proto files

Multiple services vs a single service in a proto file

Methods in the same service are code-generated into the same class, but that class is of limited use: since RPC calls are stateless (RESTful), multiple calls or multiple rpc methods cannot share data through the service, and the user has to solve that some other way.

A service can also separate rpcs that share a name, e.g.
  • service1/helloworld
  • service2/helloworld

Methods are stored as RpcServiceMethod entries and invoked by index:

::grpc::Service::RequestAsyncUnary(0, context, request, response, new_call_cq, notification_cq, tag);
::grpc::Service::RequestAsyncUnary(1, context, request, response, new_call_cq, notification_cq, tag);

Declaring rpcs: UnaryCall & StreamingCall

A non-streaming call is called a unary call: exactly one message is sent and one received, so the amount of data is fixed. A streaming call can send or receive repeatedly, so the amount of data is not fixed.

A streaming call can keep writing until WritesDone/Finish is sent, so the receiving side always loops (a fuller client-side sketch follows):

while(read stream){}
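A concrete synchronous client-side version of that loop might look like this (SayHelloStream is a hypothetical server-streaming rpc):

grpc::ClientContext ctx;
helloworld::HelloRequest req;
helloworld::HelloReply reply;
auto reader = stub->SayHelloStream(&ctx, req);  // returns once the call is set up
while (reader->Read(&reply)) {
    // one iteration per streamed message, until the server finishes
}
grpc::Status status = reader->Finish();         // collect the final status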

gRPC supports streaming on the client only, on the server only, and on both sides (bidirectional); the plain case, with streaming on neither side, is NORMAL_RPC (unary call):
  • grpc::internal::RpcMethod::NORMAL_RPC
  • grpc::internal::RpcMethod::RpcType::SERVER_STREAMING
  • grpc::internal::RpcMethod::RpcType::CLIENT_STREAMING
  • grpc::internal::RpcMethod::RpcType::BIDI_STREAMING

Understanding the pb.h and grpc.pb.h files

protoc invokes the grpc_cpp_plugin to generate the grpc.pb.{h,cc} files, which contain the rpc method implementations.

pb.{h,cc} define serialization and deserialization for the protobuf messages.

How reflection, serialization, and deserialization are implemented

pb.h implements the language-specific parsing of rpc request and response messages, plus the generic protobuf methods, e.g. has_xx (in proto3 only message-typed fields support this) and class XXX_CPP_API.

Every generated class inherits from google::protobuf::Message:

class HelloRequest PROTOBUF_FINAL :
      public ::PROTOBUF_NAMESPACE_ID::Message

#define PROTOBUF_NAMESPACE "google::protobuf"
#define PROTOBUF_NAMESPACE_ID google::protobuf 

Comments in Message explain that the key functions are SerializeToString and ParseFromString, plus an array version, SerializeToArray.
There is also the reflection accessor GetDescriptor(), used to inspect fields dynamically:
// Example usage:
  //
  // Say you have a message defined as:
  //
  //   message Foo {
  //     optional string text = 1;
  //     repeated int32 numbers = 2;
  //   }
  //
  // Then, if you used the protocol compiler to generate a class from the above
  // definition, you could use it like so:
  //
  //   std::string data;  // Will store a serialized version of the message.
  //
  //   {
  //     // Create a message and serialize it.
  //     Foo foo;
  //     foo.set_text("Hello World!");
  //     foo.add_numbers(1);
  //     foo.add_numbers(5);
  //     foo.add_numbers(42);
  //
  //     foo.SerializeToString(&data);
  //   }
  //
  //   {
  //     // Parse the serialized message and check that it contains the
  //     // correct data.
  //     Foo foo;
  //     foo.ParseFromString(data);
  //
  //     assert(foo.text() == "Hello World!");
  //     assert(foo.numbers_size() == 3);
  //     assert(foo.numbers(0) == 1);
  //     assert(foo.numbers(1) == 5);
  //     assert(foo.numbers(2) == 42);
  //   }

A Message can be converted to raw bytes like this:

int size = reqMsg.ByteSizeLong();
char* array = new char[size];
reqMsg.SerializeToArray(array, size);

std::string bytes = reqMsg.SerializeAsString();
const char* array = bytes.data();
int size = bytes.size();
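Parsing goes the other way (reqMsg's type stands in for any generated message class):

// Reconstruct a message from raw bytes; returns false on malformed input.
decltype(reqMsg) parsed;
bool ok = parsed.ParseFromArray(array, size);
// or, from a std::string:
bool ok2 = parsed.ParseFromString(bytes);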

Looking deeper, protobuf::Message inherits from protobuf::MessageLite, which implements SerializeAsString and SerializeToArray:

inline uint8* SerializeToArrayImpl(const MessageLite& msg, uint8* target,
                                     int size) {
    constexpr bool debug = false;
    if (debug) {
      // Force serialization to a stream with a block size of 1, which forces
      // all writes to the stream to cross buffers triggering all fallback paths
      // in the unittests when serializing to string / array.
      io::ArrayOutputStream stream(target, size, 1);
      uint8* ptr;
      io::EpsCopyOutputStream out(
          &stream, io::CodedOutputStream::IsDefaultSerializationDeterministic(),
          &ptr);
      ptr = msg._InternalSerialize(ptr, &out);
      out.Trim(ptr);
      GOOGLE_DCHECK(!out.HadError() && stream.ByteCount() == size);
      return target + size;
    } else {
      io::EpsCopyOutputStream out(
          target, size,
          io::CodedOutputStream::IsDefaultSerializationDeterministic());
      auto res = msg._InternalSerialize(target, &out);  // the actual call
      GOOGLE_DCHECK(target + size == res);
      return res;
    }
  }

So serialization ultimately calls the _InternalSerialize defined in the pb.h file. Take the official HelloRequest as an example:
 ::PROTOBUF_NAMESPACE_ID::uint8* HelloRequest::_InternalSerialize(
      ::PROTOBUF_NAMESPACE_ID::uint8* target, ::PROTOBUF_NAMESPACE_ID::io::EpsCopyOutputStream*   stream) const {
    // @@protoc_insertion_point(serialize_to_array_start:helloworld.HelloRequest)
    ::PROTOBUF_NAMESPACE_ID::uint32 cached_has_bits = 0;
    (void) cached_has_bits;

    // string name = 1;
    if (this->name().size() > 0) {
      ::PROTOBUF_NAMESPACE_ID::internal::WireFormatLite::VerifyUtf8String(
        this->_internal_name().data(), static_cast<int>(this->_internal_name().length()),
        ::PROTOBUF_NAMESPACE_ID::internal::WireFormatLite::SERIALIZE,
        "helloworld.HelloRequest.name");
      target = stream->WriteStringMaybeAliased(
          1, this->_internal_name(), target);
    }

    if (PROTOBUF_PREDICT_FALSE(_internal_metadata_.have_unknown_fields())) {
      target = ::PROTOBUF_NAMESPACE_ID::internal::WireFormat::InternalSerializeUnknownFieldsToArray(
         _internal_metadata_.unknown_fields<::PROTOBUF_NAMESPACE_ID::UnknownFieldSet>(::PROTOBUF_NAMESPACE_ID::UnknownFieldSet::default_instance), target, stream);
    }
    // @@protoc_insertion_point(serialize_to_array_end:helloworld.HelloRequest)
    return target;
  }

How the generated grpc.pb code implements rpc calls

The generated scaffolding exists so you can subclass Service and obtain a Stub to issue rpc calls. Strictly speaking this code is not required:
below we see how the factory classes create a Stub, and how a Service can simply be newed directly.

class XXXServer {
    // the stub used by the client
    class Stub
    // the base
    class Service
    // wrappers for the various rpc flavours, all deriving from the base
    class WithAsyncMethod_XXX
    typedef WithAsyncMethod_XXX<Service > AsyncService;
    typedef ExperimentalWithCallbackMethod_XXX<Service > CallbackService;
    class WithGenericMethod_XXX
    class WithRawMethod_XXX
    typedef WithStreamedUnaryMethod_XXX<Service > StreamedUnaryService;
}

Synchronous and asynchronous

gRPC's asynchronous mode is cq-based event-driven, with tags marking events. There is also the callback style.

On the client side

Synchronously, the call goes through '::grpc::internal::BlockingUnaryCall'.
Asynchronously, a 'ClientAsyncResponseReader' is created (non-streaming), then its write and finish are called and the tag is waited on. With streaming, the types are:
  • ::grpc::ClientAsyncReader
  • ::grpc::ClientAsyncWriter
  • ::grpc::ClientAsyncReaderWriter

These types are created by their corresponding factory classes; the generated stub code uses them the same way:

class ClientReaderFactory 
class ClientWriterFactory 
class ClientReaderWriterFactory 

On the server side

Synchronously, methods are registered with 'AddMethod'; the generated code does this in the parent class constructor. Once registered, gRPC invokes them:

Greeter::Service::Service() {
    AddMethod(new ::grpc::internal::RpcServiceMethod(
        Greeter_method_names[0],
        ::grpc::internal::RpcMethod::NORMAL_RPC,
        new ::grpc::internal::RpcMethodHandler< Greeter::Service, ::helloworld::HelloRequest, ::helloworld::HelloReply>(
            [](Greeter::Service* service,
               ::grpc_impl::ServerContext* ctx,
               const ::helloworld::HelloRequest* req,
               ::helloworld::HelloReply* resp) {
                 return service->SayHello(ctx, req, resp);
               }, this)));
  }

Asynchronously, similar to the client:
  • grpc::ServerAsyncReaderWriter
  • grpc::ServerAsyncReader
  • grpc::ServerAsyncWriter

Note the service is simply newed; in the async case these I/O objects are also newed directly, and passed in when calling:

RequestAsyncBidiStreaming
RequestAsyncClientStreaming
RequestAsyncServerStreaming

grpc callback

Used only on the client side here: a callback-style request can take a lambda that is invoked when the request completes

    stub_->async()->SayHello(&context, &request, &reply,
                             [&mu, &cv, &done, &status](Status s) {
                               status = std::move(s);
                               std::lock_guard<std::mutex> lock(mu);
                               done = true;
                               cv.notify_one();
                             });
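The calling thread then blocks until the lambda fires, in the style of the official greeter example (mu, cv, done, status as captured above):

    std::unique_lock<std::mutex> lock(mu);
    while (!done) {
      cv.wait(lock);  // woken by cv.notify_one() in the callback
    }
    // status now holds the result of the RPC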

Newer gRPC releases have dropped the experimental marker, a sign that this style has matured:

    #ifdef GRPC_CALLBACK_API_NONEXPERIMENTAL
      ::grpc::Service::
    #else
      ::grpc::Service::experimental().
    #endif

gRPC asynchronous streaming

The official repository has no example that is both asynchronous and streaming. A real project using async streaming roughly works like this:
  1. manually create the writer/reader
  2. at startup, call 'grpc::Service::RequestAsyncBidiStreaming', 'grpc::Service::RequestAsyncClientStreaming', and 'RequestAsyncServerStreaming' to push new_connection request events into the cq
  3. once a 'new_connection' event comes back, issue the read event.

There are five event types in total:

new_connection, read, write, finish, done

I wrote a demo: grpcstreamhelloworld

gRPC message size

In older gRPC versions, the sending side was unlimited but the receiving side was capped at 4 MB:

#define GRPC_DEFAULT_MAX_SEND_MESSAGE_LENGTH -1
#define GRPC_DEFAULT_MAX_RECV_MESSAGE_LENGTH (4 * 1024 * 1024)
Server-side code:
std::unique_ptr<Server> ServerBuilder::BuildAndStart() {
    if (max_receive_message_size_ >= 0) {
      args.SetInt(GRPC_ARG_MAX_RECEIVE_MESSAGE_LENGTH, max_receive_message_size_);
    }

But this changed in newer gRPC:

  std::unique_ptr<grpc::Server> ServerBuilder::BuildAndStart() {
    grpc::ChannelArguments args;
    if (max_receive_message_size_ >= -1) {
      args.SetInt(GRPC_ARG_MAX_RECEIVE_MESSAGE_LENGTH, max_receive_message_size_);
    }
    if (max_send_message_size_ >= -1) {
      args.SetInt(GRPC_ARG_MAX_SEND_MESSAGE_LENGTH, max_send_message_size_);
    }
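To raise the limits, a sketch using the public setters (the values here are arbitrary):

// Server side: allow up to 64 MB in each direction.
grpc::ServerBuilder builder;
builder.SetMaxReceiveMessageSize(64 * 1024 * 1024);
builder.SetMaxSendMessageSize(64 * 1024 * 1024);

// Client side: the channel takes the same limits via ChannelArguments.
grpc::ChannelArguments args;
args.SetMaxReceiveMessageSize(64 * 1024 * 1024);
auto channel = grpc::CreateCustomChannel(
    "localhost:50051", grpc::InsecureChannelCredentials(), args);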

gRPC build/install problems

https://github.com/grpc/grpc/issues/13841

Problems with gRPC async

An asynchronous server learns rpc results and drives the next call through completion queues, and multiple queues plus multiple threads are usually used to raise throughput:
  1. The usual setup is one queue per service, each service having several rpcs, with threads polling the completion queue. This causes heavy thread-switching overhead, and the completion queues also occupy a lot of memory
  2. Multiple threads can poll one queue, but version 0.13 may hit a bug

Problems with gRPC async streaming

A major advantage of gRPC over other frameworks is async streaming: multiple requests and multiple replies. Async mode is cq-based and event-driven, so you must wait for the tag callback; two consecutive writes raise an error. But the real requests are usually produced by business code, which does not know the tag state, i.e. whether a write is in flight. So how can a message be sent from outside the cq callback?

The answer is to maintain a send queue: messages are first enqueued, then dequeued and written when the cq callback fires. Also, since a stream must be ended explicitly (the server calls the stream's Finish; the client calls WritesDone and Finish), a special message is needed to mark the end, usually a null pointer, though an end flag also works. And because the send path is entered both by business code and by the cq callback, it must be protected with a lock; a sketch follows.
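A sketch of such a queue for an async server-side stream (all names are illustrative, not real project code; a null pointer marks the end of the stream):

std::mutex mu;                               // guards the two fields below
std::queue<std::shared_ptr<Reply>> pending;  // messages waiting to be written
bool write_in_flight = false;                // a Write's tag is still outstanding

void Send(std::shared_ptr<Reply> msg) {      // called from business code
    std::lock_guard<std::mutex> lock(mu);
    if (write_in_flight) { pending.push(std::move(msg)); return; }
    write_in_flight = true;
    stream.Write(*msg, write_tag);           // safe: no write outstanding
}

void OnWriteDone() {                         // called when the cq returns write_tag
    std::lock_guard<std::mutex> lock(mu);
    if (pending.empty()) { write_in_flight = false; return; }
    auto msg = pending.front();
    pending.pop();
    if (msg) stream.Write(*msg, write_tag);  // next queued message
    else     stream.Finish(grpc::Status::OK, finish_tag);  // null => end the stream
}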

Debugging gRPC

Set an environment variable to make gRPC print detailed information to the console:

export GRPC_VERBOSITY=DEBUG
bash-5.0# ./build/bin/hasync slave  stdin stdout @127.0.0.1:7615
D1026 08:27:44.142802149   24658 ev_posix.cc:174]            Using polling engine: epollex
D1026 08:27:44.143406685   24658 dns_resolver_ares.cc:490]   Using ares dns resolver
I1026 08:27:44.158115785   24658 server_builder.cc:332]      Synchronous server. Num CQs: 1, Min pollers: 1, Max Pollers: 2, CQ timeout (msec): 10000

Project practice

The project uses a sync/async client and a fully async server, and is compatible with all four transmission modes.

References

https://grpc.github.io/grpc/cpp/grpcpp_2impl_2codegen_2sync__stream_8h_source.html
https://grpc.github.io/grpc/cpp/grpcpp_2impl_2codegen_2byte__buffer_8h_source.html
https://grpc.github.io/grpc/cpp/call__op__set_8h_source.html

Preface

Occasionally I see segfault entries in my own or a customer's /var/log/messages; here is what I found out about them.

Apr  6 09:43:37 icm kernel: rhsm-icon[13402]: segfault at 12b0000 ip 0000003c89845c00 sp 00007ffce18396e0 error 4 in libglib-2.0.so.0.2800.8[3c89800000+115000]

Explanation

  • address (after the at) - the location in memory the code is trying to access (it's likely that 10 and 11 are offsets from a pointer we expect to be set to a valid value but which is instead pointing to 0)
  • ip - instruction pointer, ie. where the code which is trying to do this lives
  • sp - stack pointer
  • error - An error code for page faults; see below for what this means on x86.
    /*
     * Page fault error code bits:
     *
     *   bit 0 ==    0: no page found       1: protection fault
     *   bit 1 ==    0: read access         1: write access
     *   bit 2 ==    0: kernel-mode access  1: user-mode access
     *   bit 3 ==                           1: use of reserved bit detected
     *   bit 4 ==                           1: fault was an instruction fetch
     */
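Applying the table to the codes seen in this article (a quick sketch):

#include <cstdio>

int main() {
    unsigned codes[] = {4, 6};  // error 4 from the log above, error 6 from the repro below
    for (unsigned e : codes) {
        std::printf("error %u: %s, %s, %s-mode\n", e,
                    (e & 1) ? "protection fault" : "no page found",
                    (e & 2) ? "write access" : "read access",
                    (e & 4) ? "user" : "kernel");
    }
}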
    

The messages log

dmesg prints the contents of the kernel ring buffer, with information about hardware and I/O.

coredump

A core file is an image of a process that has crashed. It contains all process information pertinent to debugging: contents of hardware registers, process status, and process data. GDB will let you use this file to determine where your program crashed.

Reproducing it

void foo(){
    int *p = 0;
    *p = 100;
}

int main(){
  foo();
}
[ 5902.293905] a.out[6085]: segfault at 0 ip 000055c0eddca129 sp 00007ffe65372110 error 6 in a.out[55c0eddca000+1000]
[ 5902.293916] Code: 00 c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 f3 0f 1e fa e9 67 ff ff ff 55 48 89 e5 48 c7 45 f8 00 00 00 00 48 8b 45 f8 <c7> 00 64 00 00 00 90 5d c3 55 48 89 e5 b8 00 00 00 00 e8 d9 ff ff
(gdb) info registers 
rax            0x0                 0
rbx            0x55c0eddca150      94287112741200
rcx            0x7faa085eb598      140368261592472
rdx            0x7ffe65372228      140730596532776
rsi            0x7ffe65372218      140730596532760
rdi            0x1                 1
rbp            0x7ffe65372110      0x7ffe65372110
rsp            0x7ffe65372110      0x7ffe65372110
r8             0x0                 0
r9             0x7faa08621070      140368261812336
r10            0x69682ac           110527148
r11            0x202               514
r12            0x55c0eddca020      94287112740896
r13            0x0                 0
r14            0x0                 0
r15            0x0                 0
rip            0x55c0eddca129      0x55c0eddca129 <foo+16>
eflags         0x10246             [ PF ZF IF RF ]
cs             0x33                51
ss             0x2b                43
ds             0x0                 0
es             0x0                 0
fs             0x0                 0
gs             0x0                 0

addr2line

addr2line -e yourSegfaultingProgram 00007f9bebcca90d

CMake notes

add_custom_command usage

Defines a custom command; it has two signatures, i.e. two trigger rules.

Used together with add_custom_target, to generate files

In this case, the add_custom_target needs to appear after the add_custom_command. Syntax:

add_custom_command(OUTPUT output1 [output2 ...]
                   COMMAND command1 [ARGS] [args1...]
                   [COMMAND command2 [ARGS] [args2...] ...]
                   [MAIN_DEPENDENCY depend]
                   [DEPENDS [depends...]]
                   [BYPRODUCTS [files...]]
                   [IMPLICIT_DEPENDS <lang1> depend1
                                    [<lang2> depend2] ...]
                   [WORKING_DIRECTORY dir]
                   [COMMENT comment]
                   [DEPFILE depfile]
                   [JOB_POOL job_pool]
                   [VERBATIM] [APPEND] [USES_TERMINAL]
                   [COMMAND_EXPAND_LISTS])

This resembles make's rule syntax:

target: dependency
  command

If the dependency does not exist, make looks for a rule that produces the dependency itself; if there is no such rule either, make stops.

For example:

 cmake_minimum_required(VERSION 3.5)
 project(test)
 add_executable(${PROJECT_NAME} main.c)
 add_custom_command(OUTPUT printout 
                    COMMAND ${CMAKE_COMMAND} -E echo compile finish
                    VERBATIM
                   )
 add_custom_target(finish
                   DEPENDS printout
                   )

finish depends on printout, and the add_custom_command defines the rule for printout: printout is the output of the command underneath it.

So building the finish target triggers the add_custom_command above.

In this situation, writing the add_custom_command's COMMAND directly into the add_custom_target has exactly the same effect.

command-line-tool

Both add_custom_command usages above use COMMAND ${CMAKE_COMMAND} -E, which invokes CMake's built-in command-line tool (https://cmake.org/cmake/help/latest/manual/cmake.1.html#run-a-command-line-tool).

Putting it to use

For example, generating the protobuf files needs a custom command:

    # output files:
    FOREACH (src ${proto_srcs})
        get_filename_component(base_name ${src} NAME_WE)
        get_filename_component(path_name ${src} PATH)

        set(src "${base_name}.proto")
        set(cpp "${base_name}.pb.cc")
        set(hpp "${base_name}.pb.h")
        set(grpc_cpp "${base_name}.grpc.pb.cc")
        set(grpc_hpp "${base_name}.grpc.pb.h")

        # custom command.
        add_custom_command(
            OUTPUT ${proto_cpp_dist}/${cpp} ${proto_cpp_dist}/${hpp} ${proto_hpp_dist}/${hpp}
              ${proto_cpp_dist}/${grpc_cpp} ${proto_cpp_dist}/${grpc_hpp}
            COMMAND ${PROTOBUF_PROTOC_EXECUTABLE}
            ARGS ${OUTPUT_PATH}
              --grpc_out ${proto_cpp_dist}
              --plugin=protoc-gen-grpc=${GRPC_CPP_PLUGIN}
              ${src} 
            DEPENDS ${src}
            COMMAND ${CMAKE_COMMAND}
            ARGS -E copy_if_different ${proto_cpp_dist}/${hpp} ${proto_hpp_dist}/${hpp}
            COMMAND ${CMAKE_COMMAND}
            ARGS -E copy_if_different  ${proto_cpp_dist}/${grpc_hpp} ${proto_hpp_dist}/${grpc_hpp}
            WORKING_DIRECTORY ${path_name}
            COMMENT "${PROTOBUF_PROTOC_EXECUTABLE} --cpp_out=${proto_cpp_dist} ${src}"
            )

        LIST(APPEND output ${proto_cpp_dist}/${cpp})
    ENDFOREACH()

Standalone use, triggered by the build

When the project has an add_library or add_executable target, a command can be triggered before compiling, before linking, or after building the target:

add_custom_command(TARGET <target>
                   PRE_BUILD | PRE_LINK | POST_BUILD
                   COMMAND command1 [ARGS] [args1...]
                   [COMMAND command2 [ARGS] [args2...] ...]
                   [BYPRODUCTS [files...]]
                   [WORKING_DIRECTORY dir]
                   [COMMENT comment]
                   [VERBATIM] [USES_TERMINAL]
                   [COMMAND_EXPAND_LISTS])

CMake logical expressions

Using third-party libraries in CMake

Linking a third-party library in a project always comes down to 'target_include_directories' and 'target_link_libraries', provided the package has been brought in; the lookup itself can be done with find_package.

Finding packages with find_package

find_package operates in one of two modes: Module and Config.

Module mode

find_package first searches for a FindXXX.cmake file under '/usr/share/cmake/Modules/' and under the custom paths in CMAKE_MODULE_PATH. The project's CMakeLists.txt then calls find_package(), after which the third-party library can be used at link time.

find_package()

Config mode

When find_package cannot find a FindXXX.cmake file, it looks for
  • <PackageName>Config.cmake
  • <lower-case-package-name>-config.cmake

If the third-party project supports CMake, first build and install it with CMake into the environment (or a docker environment); that places the files above under '/usr/lib/cmake/<PackageName>/'.

Installing FindXXX.cmake files

When there is no FindXXX.cmake, the extra cmake modules package from your package manager may contain the one you need:

$ pacman -S extra-cmake-modules

Then the following command shows a large number of 'Find*.cmake' files:

ls /usr/share/cmake-3.20/Modules/

Writing your own FindXXX.cmake

If none of the above works, you need to write the FindXXX.cmake yourself and put it under CMAKE_MODULE_PATH.
For example, create a cmake_module directory in the project root, then point at it with set(CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake_module).
Finally, create the FindXXX.cmake file under 'cmake_module'; it holds the rules for finding the headers and libraries, roughly:

find_path(Grpc_INCLUDE_DIR grpc++/grpc++.h)
mark_as_advanced(Grpc_INCLUDE_DIR)

find_library(Grpc++_LIBRARY NAMES grpc++ grpc++-1-dll)
mark_as_advanced(Grpc++_LIBRARY)

With this file in place, the project's CMake can use find_package() directly.

Compiling and linking from source

Put the third-party library's source in a project directory such as third:

  1. Put it in the third directory; it can be managed with git submodule
  2. Add a CMakeLists.txt in the third directory and declare targets in it, ready to be linked from the project:
    # for gsl-lite target
    add_library(gsl-lite INTERFACE)
    target_include_directories(gsl-lite SYSTEM INTERFACE ${CMAKE_CURRENT_SOURCE_DIR}/gsl-lite/include)
    

FetchContent: automatic fetching of source dependencies

Since CMake 3.11, this approach automatically fetches a library from the network so it can be used directly in your own project:

# NOTE: This example uses cmake version 3.14 (FetchContent_MakeAvailable).
# Since it streamlines the FetchContent process
cmake_minimum_required(VERSION 3.14)

include(FetchContent)

# In this example we are picking a specific tag.
# You can also pick a specific commit, if you need to.
FetchContent_Declare(GSL
    GIT_REPOSITORY "https://github.com/microsoft/GSL"
    GIT_TAG "v3.1.0"
)

FetchContent_MakeAvailable(GSL)

# Now you can link against the GSL interface library
add_executable(foobar)

# Link against the interface library (IE header only library)
target_link_libraries(foobar PRIVATE GSL)

A problem using OpenSSL with CMake

OpenSSL does not use CMake, so there are no .cmake files, and project configuration fails:

 Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the
  system variable OPENSSL_ROOT_DIR (missing: OPENSSL_LIBRARIES
  OPENSSL_INCLUDE_DIR)

It turns out OpenSSL ships a pkg-config file instead:

#/usr/local/lib/pkgconfig/openssl.pc
prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: OpenSSL
Description: Secure Sockets Layer and cryptography libraries and tools
Version: 1.1.1k
Requires: libssl libcrypto

Besides solving this with a custom cmake module, you can also point CMake at the installation that owns the pc file:

cmake -DOPENSSL_ROOT_DIR=/usr/local/ 

ExternalProject_Add

This one is rarely used.

find_package vs find_library

find_library is a low-level CMake command that searches for the library file itself (find_path does the same for headers).
find_package uses find_library to locate the library, and once it finds the target it also defines a set of variables, as the header of 'Findlibproxy.cmake' shows:

# - Try to find libproxy
# Once done this will define
#
#  LIBPROXY_FOUND - system has libproxy
#  LIBPROXY_INCLUDE_DIR - the libproxy include directory
#  LIBPROXY_LIBRARIES - libproxy library
#
# Copyright (c) 2010, Dominique Leuenberger
#
# Redistribution and use is allowed according the license terms
# of libproxy, which this file is integrated part of.

# Find proxy.h and the corresponding library (libproxy.so)
FIND_PATH(LIBPROXY_INCLUDE_DIR proxy.h )
FIND_LIBRARY(LIBPROXY_LIBRARIES NAMES proxy )

When libproxy.so is found, LIBPROXY_FOUND is set to TRUE, and so on.

Shell notes

On unix-like systems the shell is the medium between user and system: it parses the user's input and invokes system functions. Common implementations include bash, zsh, ksh, etc.; they differ in many details, but bash is the most widely used.

bash pattern expansion

bash string operations

bash array operations

Environment variables

A test program that periodically reads and prints an environment variable:

#include <stdio.h>
#include <stdlib.h> /* getenv */
#include <unistd.h> /* sleep */

int main() {
  while (1) {
    char *env = getenv("TEST_ENV");
    printf("env: %s\n", env);
    sleep(5);
  }
}

Change the environment variable from bash:

#test.sh
export TEST_ENV=TEST
./a.out
export TEST_ENV=NNN

Running test.sh, the C program never sees an updated value: a child process receives a copy of the environment when it is executed, so later exports in the parent shell do not reach it.

env: TEST
env: TEST
env: TEST

The full environment can be listed via the global environ:

#include <stdio.h>

extern char **environ;

int main() {
  char **var;
  for (var = environ; *var != NULL; ++var) {
    printf("%s\n", *var);
  }
}

set and unset

Preface

Since switching to Arch I have been using urxvt. It is simple and lightweight, but undeniably has problems: after long stretches in Chinese input mode it can stop accepting Chinese, and configuration is fiddly, requiring .Xresources to be wired up in a startup script.
Here I record my urxvt configuration as a backup.

The urxvt features I rely on

urxvt has a pleasantly minimal tab feature, supports multiplexing and right-click menu string formatting, offers fake transparency, and is very lightweight.

urxvt is not modern or out-of-the-box, and needs the following changes:
  • the tab feature requires patching the perl extension, since switching tabs is unsupported by default
  • icons are unsupported; the icon path has to be set manually in the config file
  • tabs need an extra startup argument, so we also write a desktop launcher file

Perl changes

Copy /usr/lib/perl/ext/tabbed to ~/.urxvt/ext/ under your home directory, then modify the tab_key_press function as follows:

# if ($keysym == 0xff51 || $keysym == 0xff53)  means Ctrl+Shift plus the arrow keys move between tabs
sub tab_key_press {
   my ($self, $tab, $event, $keysym, $str) = @_;

   if ($event->{state} & urxvt::ShiftMask && !($event->{state} & urxvt::ControlMask) ) {
      if ($keysym == 0xff51 || $keysym == 0xff53) {
         my ($idx) = grep $self->{tabs}[$_] == $tab, 0 .. $#{ $self->{tabs} };

         --$idx if $keysym == 0xff51;
         ++$idx if $keysym == 0xff53;

         $self->make_current ($self->{tabs}[$idx % @{ $self->{tabs}}]);

         return 1;
      } elsif ($keysym == 0xff54) {
         $self->new_tab;

         return 1;
      }
   }elsif ($event->{state} & urxvt::ControlMask && $event->{state} & urxvt::ShiftMask) {
      if ($keysym == 0xff51 || $keysym == 0xff53) {
         my ($idx1) = grep $self->{tabs}[$_] == $tab, 0 .. $#{ $self->{tabs} };
         my  $idx2  = ($idx1 + ($keysym == 0xff51 ? -1 : +1)) % @{ $self->{tabs} };

         ($self->{tabs}[$idx1], $self->{tabs}[$idx2]) =
            ($self->{tabs}[$idx2], $self->{tabs}[$idx1]);

         $self->make_current ($self->{tabs}[$idx2]);

         return 1;
      }
   }

   ()
}

urxvt launcher file

Create a launcher file so that urxvt starts in tab mode by default: '.local/share/applications/urxvtq.desktop'

[Desktop Entry]
Version=1.0
Name=urxvtq
Comment=An unicode capable rxvt clone
Exec=urxvt -pe tabbed
Icon=utilities-terminal
Terminal=false
Type=Application
Categories=System;TerminalEmulator;

urxvt configuration

Create the following file, and add this line to a suitable startup script: [ -f "$HOME/.Xresources" ] && xrdb -merge "$HOME/.Xresources"

!!$HOME/.Xresources

!! dpi
Xft.dpi:98

! Tango colours

!! underline colour
URxvt.colorUL:  #87afd7
URxvt.colorBD:  white
URxvt.colorIT:  green

!! tab colours
URxvt.tabbed.tabbar-fg: 2
URxvt.tabbed.tabbar-bg: 0
URxvt.tabbed.tab-fg:    3
URxvt.tabbed.tab-bg:    2
URxvt.tabbed.tabren-bg: 3
URxvt.tabbed.tabdiv-fg: 8
URxvt.tabbed.tabsel-fg: 1
URxvt.tabbed.tabsel-bg: 8

!! fake transparent
URxvt.transparent: true
URxvt.shading:     10
URxvt.fading:      40
!! font
URxvt.font:        xft:Monospace,xft:Awesome:pixelsize=14
URxvt.boldfont:    xft:Monospace,xft:Awesome:style=Bold:pixelsize=16

!! scroll behavior
URxvt.scrollBar:         false
URxvt.scrollTtyOutput:   false
URxvt.scrollWithBuffer:  true
URxvt.scrollTtyKeypress: true

!! additional
URxvt.internalBorder:     0
URxvt.cursorBlink: true
URxvt.saveLines:          2000
URxvt.mouseWheelScrollPage:             false

! Restore Ctrl+Shift+(c|v)
URxvt.keysym.Shift-Control-V: eval:paste_clipboard
URxvt.keysym.Shift-Control-C: eval:selection_to_clipboard
URxvt.iso14755: false
URxvt.iso14755_52: false

! alt+s search
URxvt.perl-ext:   default,matcher,searchable-scrollback
URxvt.keysym.M-s: searchable-scrollback:start

! url match; the catch is that in tab mode it cannot open the browser
URxvt.url-launcher:       /usr/bin/firefox
URxvt.matcher.button:     1


URxvt.termName:         xterm-256color
URxvt.iconFile:     /usr/share/icons/gnome/32x32/apps/gnome-terminal-icon.png
! fast key
URxvt.keysym.Control-Up:     \033[1;5A
URxvt.keysym.Control-Down:   \033[1;5B
URxvt.keysym.Control-Left:   \033[1;5D
URxvt.keysym.Control-Right:  \033[1;5C

Finally

With kitty there are no input-method problems and font support is rich; I recommend the modern terminal emulator kitty.