Deep Live Cam – simple to use, real-time face swapping that works even when the face is partially blocked

An open-source tool that swaps faces in real time, even with multiple faces in the frame. There is also a portable, ready-to-run build with the whole environment pre-packaged for people who don't want to set anything up, and it supports Windows, macOS, and GPU acceleration.

Prerequisites

Git source code

https://github.com/hacksider/Deep-Live-Cam.git

Download the models

  1. GFPGANv1.4
  2. inswapper_128.onnx (Note: Use this replacement version if an issue occurs on your computer)

Then place both files in the models directory.
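
Before launching, a quick sanity check from Python confirms the files ended up where the program expects them. This is only a sketch; the file names below are assumptions, so adjust them to whatever you actually downloaded:

from pathlib import Path

# Assumed file names; rename them to match the files you downloaded
for name in ("GFPGANv1.4.pth", "inswapper_128.onnx"):
    path = Path("models") / name
    print(path, "found" if path.exists() else "MISSING")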

Install the dependencies

pip install -r requirements.txt

References

https://github.com/hacksider/Deep-Live-Cam

GraphRAG with a local Ollama

Graph Retrieval-Augmented Generation (GraphRAG) is extremely useful, but it is also extremely expensive to run. If you want to save money, you can point it at a local service such as Ollama instead. The steps below show how; the prerequisite is that you already have the OpenAI version of GraphRAG working, and this post switches that setup from OpenAI to Ollama.

Before you start

Set up GraphRAG first.

Download and install Ollama.

Install the ollama Python package

pip install ollama
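
To confirm the package and the local Ollama server are working, you can ask for an embedding and a chat reply from Python. A minimal sketch, assuming you have already pulled the two models used below (ollama pull llama3 and ollama pull nomic-embed-text):

import ollama

# Assumes `ollama serve` is running locally and both models have been pulled
vector = ollama.embeddings(model="nomic-embed-text", prompt="hello world")["embedding"]
print(len(vector))  # embedding dimension

reply = ollama.chat(model="llama3", messages=[{"role": "user", "content": "Say hi"}])
print(reply["message"]["content"])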

Modify the original settings.yaml file

Replace the old YAML with an Ollama-oriented configuration. The new file looks like this:

encoding_model: cl100k_base
skip_workflows: []
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: openai_chat # or azure_openai_chat
  # model: gpt-4o-mini
  model_supports_json: true # recommended if this is available for your model.
  # max_tokens: 4000
  # request_timeout: 180.0
  # api_base: https://<instance>.openai.azure.com
  # api_version: 2024-02-15-preview
  # organization: <organization_id>
  # deployment_name: <azure_model_deployment_name>
  # tokens_per_minute: 150_000 # set a leaky bucket throttle
  # requests_per_minute: 10_000 # set a leaky bucket throttle
  # max_retries: 10
  # max_retry_wait: 10.0
  # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
  # concurrent_requests: 25 # the number of parallel inflight requests that may be made

  # ollama api_base
  api_base: http://localhost:11434/v1
  model: llama3

parallelization:
  stagger: 0.3
  # num_threads: 50 # the number of threads to use for parallel processing

async_mode: threaded # or asyncio

embeddings:
  ## parallelization: override the global parallelization settings for embeddings
  async_mode: threaded # or asyncio
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding # or azure_openai_embedding
    # model: text-embedding-3-small
    
    # ollama
    model: nomic-embed-text 
    api_base: http://localhost:11434/api

    # api_base: https://<instance>.openai.azure.com
    # api_version: 2024-02-15-preview
    # organization: <organization_id>
    # deployment_name: <azure_model_deployment_name>
    # tokens_per_minute: 150_000 # set a leaky bucket throttle
    # requests_per_minute: 10_000 # set a leaky bucket throttle
    # max_retries: 10
    # max_retry_wait: 10.0
    # sleep_on_rate_limit_recommendation: true # whether to sleep when azure suggests wait-times
    # concurrent_requests: 25 # the number of parallel inflight requests that may be made
    # batch_size: 16 # the number of documents to send in a single request
    # batch_max_tokens: 8191 # the maximum number of tokens to send in a single request
    # target: required # or optional
  
chunks:
  size: 300
  overlap: 100
  group_by_columns: [id] # by default, we don't allow chunks to cross documents
    
input:
  type: file # or blob
  file_type: text # or csv
  base_dir: "input"
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"

cache:
  type: file # or blob
  base_dir: "cache"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

storage:
  type: file # or blob
  base_dir: "output/${timestamp}/artifacts"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

reporting:
  type: file # or console, blob
  base_dir: "output/${timestamp}/reports"
  # connection_string: <azure_blob_storage_connection_string>
  # container_name: <azure_blob_storage_container_name>

entity_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/entity_extraction.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 0

summarize_descriptions:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

claim_extraction:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  # enabled: true
  prompt: "prompts/claim_extraction.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 0

community_report:
  ## llm: override the global llm settings for this task
  ## parallelization: override the global parallelization settings for this task
  ## async_mode: override the global async_mode settings for this task
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

cluster_graph:
  max_cluster_size: 10

embed_graph:
  enabled: true # if true, will generate node2vec embeddings for nodes
  # num_walks: 10
  # walk_length: 40
  # window_size: 2
  # iterations: 3
  # random_seed: 597832

umap:
  enabled: true # if true, will generate UMAP embeddings for nodes

snapshots:
  graphml: true
  raw_entities: true
  top_level_nodes: true

local_search:
  # text_unit_prop: 0.5
  # community_prop: 0.1
  # conversation_history_max_turns: 5
  # top_k_mapped_entities: 10
  # top_k_relationships: 10
  # max_tokens: 12000

global_search:
  # max_tokens: 12000
  # data_max_tokens: 12000
  # map_max_tokens: 1000
  # reduce_max_tokens: 2000
  # concurrency: 32

In the llm block:

change model: llama3

add api_base: http://localhost:11434/v1

In the embeddings block:

change model: nomic-embed-text

add api_base: http://localhost:11434/api
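
GraphRAG talks to the llm endpoint through the OpenAI client, so you can verify that Ollama's OpenAI-compatible endpoint answers before running a full index. A minimal sketch, assuming llama3 has already been pulled (Ollama ignores the API key, but the client requires one):

from openai import OpenAI

# Point the standard OpenAI client at Ollama's OpenAI-compatible endpoint
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Reply with one word: ready"}],
)
print(response.choices[0].message.content)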

Modify the GraphRAG code

Besides settings.yaml, the GraphRAG code itself has to be patched to support Ollama. There are two files to change, and you can use the ready-made code below.

  1. C:\Users\xxx\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\llm\openai\openai_embeddings_llm.py

Add an ollama setting block:

# Copyright (c) 2024 Microsoft Corporation.
# Licensed under the MIT License

"""The EmbeddingsLLM class."""

from typing_extensions import Unpack

from graphrag.llm.base import BaseLLM
from graphrag.llm.types import (
    EmbeddingInput,
    EmbeddingOutput,
    LLMInput,
)
import ollama

from .openai_configuration import OpenAIConfiguration
from .types import OpenAIClientTypes


class OpenAIEmbeddingsLLM(BaseLLM[EmbeddingInput, EmbeddingOutput]):
    """A text-embedding generator LLM."""

    _client: OpenAIClientTypes
    _configuration: OpenAIConfiguration

    def __init__(self, client: OpenAIClientTypes, configuration: OpenAIConfiguration):
        self.client = client
        self.configuration = configuration

    async def _execute_llm(
        self, input: EmbeddingInput, **kwargs: Unpack[LLMInput]
    ) -> EmbeddingOutput | None:
        args = {
            "model": self.configuration.model,
            **(kwargs.get("model_parameters") or {}),
        }
        # openai setting
        # embedding = await self.client.embeddings.create(
        #    input=input,
        #    **args,
        #)
        #return [d.embedding for d in embedding.data]

        # ollama setting
        embedding_list = []
        for inp in input:
            # Change the model name if you use a different embedding model; it should match
            # the embedding model configured in settings.yaml (e.g. nomic-embed-text)
            embedding = ollama.embeddings(model="qwen:7b", prompt=inp)
            embedding_list.append(embedding["embedding"])
        return embedding_list

  2. C:\Users\xxx\anaconda3\envs\GraphRAG\Lib\site-packages\graphrag\query\llm\oai\embedding.py

Add an ollama setting block and comment out the openai setting block:

# Copyright (c) 2024 Microsoft Corporation.
# Licensed under the MIT License

"""OpenAI Embedding model implementation."""

import asyncio
from collections.abc import Callable
from typing import Any

import numpy as np
import tiktoken
from tenacity import (
    AsyncRetrying,
    RetryError,
    Retrying,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential_jitter,
)

from graphrag.query.llm.base import BaseTextEmbedding
from graphrag.query.llm.oai.base import OpenAILLMImpl
from graphrag.query.llm.oai.typing import (
    OPENAI_RETRY_ERROR_TYPES,
    OpenaiApiType,
)
from graphrag.query.llm.text_utils import chunk_text
from graphrag.query.progress import StatusReporter
import ollama

class OpenAIEmbedding(BaseTextEmbedding, OpenAILLMImpl):
    """Wrapper for OpenAI Embedding models."""

    def __init__(
        self,
        api_key: str | None = None,
        azure_ad_token_provider: Callable | None = None,
        model: str = "text-embedding-3-small",
        deployment_name: str | None = None,
        api_base: str | None = None,
        api_version: str | None = None,
        api_type: OpenaiApiType = OpenaiApiType.OpenAI,
        organization: str | None = None,
        encoding_name: str = "cl100k_base",
        max_tokens: int = 8191,
        max_retries: int = 10,
        request_timeout: float = 180.0,
        retry_error_types: tuple[type[BaseException]] = OPENAI_RETRY_ERROR_TYPES,  # type: ignore
        reporter: StatusReporter | None = None,
    ):
        OpenAILLMImpl.__init__(
            self=self,
            api_key=api_key,
            azure_ad_token_provider=azure_ad_token_provider,
            deployment_name=deployment_name,
            api_base=api_base,
            api_version=api_version,
            api_type=api_type,  # type: ignore
            organization=organization,
            max_retries=max_retries,
            request_timeout=request_timeout,
            reporter=reporter,
        )

        self.model = model
        self.encoding_name = encoding_name
        self.max_tokens = max_tokens
        self.token_encoder = tiktoken.get_encoding(self.encoding_name)
        self.retry_error_types = retry_error_types

    def embed(self, text: str, **kwargs: Any) -> list[float]:
        """
        Embed text using OpenAI Embedding's sync function.

        For text longer than max_tokens, chunk texts into max_tokens, embed each chunk, then combine using weighted average.
        Please refer to: https://github.com/openai/openai-cookbook/blob/main/examples/Embedding_long_inputs.ipynb
        """
        token_chunks = chunk_text(
            text=text, token_encoder=self.token_encoder, max_tokens=self.max_tokens
        )
        chunk_embeddings = []
        chunk_lens = []
        for chunk in token_chunks:
            try:
                # openai setting
                #embedding, chunk_len = self._embed_with_retry(chunk, **kwargs)
                #chunk_embeddings.append(embedding)
                #chunk_lens.append(chunk_len)

                # ollama setting
                # chunk_text() may yield tuples of token ids; decode them back to text if so
                if not isinstance(chunk, str):
                    chunk = self.token_encoder.decode(list(chunk))
                # change the model name here if you swap in a different embedding model
                embedding = ollama.embeddings(model="nomic-embed-text", prompt=chunk)["embedding"]
                chunk_embeddings.append(embedding)
                chunk_lens.append(len(chunk))
            # TODO: catch a more specific exception
            except Exception as e:  # noqa BLE001
                self._reporter.error(
                    message="Error embedding chunk",
                    details={self.__class__.__name__: str(e)},
                )

                continue
        # combine the chunk embeddings into one weighted-average vector, as the original OpenAI path does
        chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens)
        chunk_embeddings = chunk_embeddings / np.linalg.norm(chunk_embeddings)
        return chunk_embeddings.tolist()

    async def aembed(self, text: str, **kwargs: Any) -> list[float]:
        """
        Embed text using OpenAI Embedding's async function.

        For text longer than max_tokens, chunk texts into max_tokens, embed each chunk, then combine using weighted average.
        """
        token_chunks = chunk_text(
            text=text, token_encoder=self.token_encoder, max_tokens=self.max_tokens
        )
        chunk_embeddings = []
        chunk_lens = []
        embedding_results = await asyncio.gather(*[
            self._aembed_with_retry(chunk, **kwargs) for chunk in token_chunks
        ])
        embedding_results = [result for result in embedding_results if result[0]]
        chunk_embeddings = [result[0] for result in embedding_results]
        chunk_lens = [result[1] for result in embedding_results]
        chunk_embeddings = np.average(chunk_embeddings, axis=0, weights=chunk_lens)  # type: ignore
        chunk_embeddings = chunk_embeddings / np.linalg.norm(chunk_embeddings)
        return chunk_embeddings.tolist()

    def _embed_with_retry(
        self, text: str | tuple, **kwargs: Any
    ) -> tuple[list[float], int]:
        try:
            retryer = Retrying(
                stop=stop_after_attempt(self.max_retries),
                wait=wait_exponential_jitter(max=10),
                reraise=True,
                retry=retry_if_exception_type(self.retry_error_types),
            )
            for attempt in retryer:
                with attempt:
                    embedding = (
                        self.sync_client.embeddings.create(  # type: ignore
                            input=text,
                            model=self.model,
                            **kwargs,  # type: ignore
                        )
                        .data[0]
                        .embedding
                        or []
                    )
                    return (embedding, len(text))
        except RetryError as e:
            self._reporter.error(
                message="Error at embed_with_retry()",
                details={self.__class__.__name__: str(e)},
            )
            return ([], 0)
        else:
            # TODO: why not just throw in this case?
            return ([], 0)

    async def _aembed_with_retry(
        self, text: str | tuple, **kwargs: Any
    ) -> tuple[list[float], int]:
        try:
            retryer = AsyncRetrying(
                stop=stop_after_attempt(self.max_retries),
                wait=wait_exponential_jitter(max=10),
                reraise=True,
                retry=retry_if_exception_type(self.retry_error_types),
            )
            async for attempt in retryer:
                with attempt:
                    embedding = (
                        await self.async_client.embeddings.create(  # type: ignore
                            input=text,
                            model=self.model,
                            **kwargs,  # type: ignore
                        )
                    ).data[0].embedding or []
                    return (embedding, len(text))
        except RetryError as e:
            self._reporter.error(
                message="Error at embed_with_retry()",
                details={self.__class__.__name__: str(e)},
            )
            return ([], 0)
        else:
            # TODO: why not just throw in this case?
            return ([], 0)
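
After patching both files, re-run indexing so the graph and the embeddings are rebuilt with the local models, then try a query. A minimal sketch using subprocess, assuming the project root is ./ragtest as in the web API example later in this post:

import subprocess

# Rebuild the index with the Ollama-backed settings.yaml; local models can take a while
subprocess.run(["python", "-m", "graphrag.index", "--root", "./ragtest"], check=True)

# Run a local-search query against the freshly built index
subprocess.run(["python", "-m", "graphrag.query", "--root", "./ragtest",
                "--method", "local", "Who are the main characters?"], check=True)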

Flux AI – it can finally render text in images

Ways to use Flux AI for free

  1. Huggingface
  2. Seaart
  3. Glif
  4. FluxPro

Using Flux AI on your own machine

Use the Flux Pro API

API documentation

Installing on your own machine

Flux recommends Python 3.10. You can download the code from GitHub and install it, but this way only the dev (development) and schnell (fast) models are available.

cd $HOME && git clone https://github.com/black-forest-labs/flux
cd $HOME/flux
python3.10 -m venv .venv
source .venv/bin/activate
pip install -e ".[all]"

Model download links:

FLUX 1 schnell

FLUX 1 Dev

Once the models and the code are installed, set the following environment variables:

export FLUX_SCHNELL=<path_to_flux_schnell_sft_file>
export FLUX_DEV=<path_to_flux_dev_sft_file>
export AE=<path_to_ae_sft_file>
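
If you drive Flux from a Python script or notebook instead of a shell, the same variables can be set with os.environ before importing flux. A small sketch; the paths are placeholders for wherever you saved the downloaded weight files:

import os

# Placeholder paths; point them at the .sft files you downloaded
os.environ["FLUX_SCHNELL"] = "/path/to/flux1-schnell.sft"
os.environ["FLUX_DEV"] = "/path/to/flux1-dev.sft"
os.environ["AE"] = "/path/to/ae.sft"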

There are two ways to use it. The first is to start an interactive session:

python -m flux --name <name> --loop

The other is to generate an image directly from the CLI:

python -m flux --name <name> \
  --height <height> --width <width> \
  --prompt "<prompt>"

Parameters

  • --name: model name ("flux-schnell" or "flux-dev")
  • --device: run on the CPU or the GPU (default: "cuda" if available, otherwise "cpu")
  • --offload: offload the model from the GPU to the CPU while it is not in use. This frees GPU memory when the model is idle; the model is loaded back onto the GPU when it is needed again.
  • --share: expose your link publicly

Substitute the model name for <name>. For example:

python demo_gr.py --name flux-schnell --device cuda --prompt "a girl"

Turning your Python code into a web API

In the age of AI you really can't get away from Python. Sometimes you want to quickly validate a design and make it available as a service so people outside can try it. In that case, consider turning the Python code you run on the CLI into a web API. Here is how:

Step 1: Install Flask

pip install Flask

Step 2: Create a web app

Create a file named webapi.py with the code below; this is a simple way to expose the GraphRAG service to the outside world.

from flask import Flask, request, jsonify
import subprocess
import shlex

app = Flask(__name__)

@app.route('/query', methods=['POST'])
def query():
    # Get the question from the request
    data = request.json
    question = data.get('question')
    
    if not question:
        return jsonify({'error': 'No question provided'}), 400
    
    # Build the CLI command
    command = f"python -m graphrag.query --root ./ragtest --method local \"{question}\""
    # Split the command into arguments safely
    args = shlex.split(command)
    
    # Run the command
    try:
        result = subprocess.run(args, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
        response = result.stdout
        # Assume the output contains "SUCCESS:" followed by the answer we need
        if "SUCCESS:" in response:
            answer = response.split("SUCCESS:")[1].strip()  # take the text after SUCCESS: as the answer
            return jsonify({'answer': answer})
        else:
            return jsonify({'error': 'Failed to get a valid response from the CLI tool'}), 500
    except subprocess.CalledProcessError as e:
        return jsonify({'error': str(e)}), 500

if __name__ == '__main__':
    app.run(debug=True, port=5000)

Step 3: Start the service

python webapi.py

Step 4: Call the API

curl -X POST http://localhost:5000/query -H "Content-Type: application/json" -d "{\"question\": \"新修正之勞工特別休假日數有多少?\"}"
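
The same call from Python, for example with the requests library (assuming the Flask app above is running on port 5000):

import requests

response = requests.post(
    "http://localhost:5000/query",
    json={"question": "How many days of annual special leave does the amended law grant?"},
    timeout=300,  # the underlying CLI query can take a while
)
print(response.json())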

Controlling your Wi-Fi (wireless network) with wpa_cli

wpa_cli is a command-line tool for interacting with wpa_supplicant. It also lets you control Wi-Fi directly from the command line and manage the configuration and runtime state of wireless interfaces. It is quite powerful, supporting operations such as scanning for networks, connecting to a network, and changing settings; I drive wpa_cli from Python to control Wi-Fi.

First, scan for wireless networks

Open a terminal, and remember to run wpa_cli with root privileges.

sudo wpa_cli -i wlan1 scan
sleep 5  # give the scan some time to finish
sudo wpa_cli -i wlan1 scan_results

The scan_results output lists each nearby network's bssid, frequency, signal level, flags, and ssid.

Add a network configuration

Use the SSID and password to connect to a known wireless network. You can save the commands below as a script such as wpa_cli_add_network.sh and run it, or run them one by one. The network id you get back is usually an integer; note it down, because you will later use it to connect to and disconnect from the network (a Python version of the same flow is sketched after the script).

# Add a new network configuration
network_id=$(wpa_cli -i wlan1 add_network | awk '{print $NF}')

# Set the SSID and password
wpa_cli -i wlan1 set_network $network_id ssid '"YourSSID"'
wpa_cli -i wlan1 set_network $network_id psk '"YourPassword"'

# Enable the network
wpa_cli -i wlan1 enable_network $network_id

# Save the configuration
wpa_cli -i wlan1 save_config
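
Since I normally drive wpa_cli from Python, here is a minimal sketch of the same add-and-connect flow using subprocess. The interface name and credentials are placeholders, and like the shell version it has to run as root:

import subprocess

def wpa(iface, *args):
    # Run one wpa_cli command and return its stdout, stripped
    result = subprocess.run(["wpa_cli", "-i", iface, *args],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

iface = "wlan1"
network_id = wpa(iface, "add_network")                       # returns the new network id
wpa(iface, "set_network", network_id, "ssid", '"YourSSID"')  # the ssid must stay double-quoted
wpa(iface, "set_network", network_id, "psk", '"YourPassword"')
wpa(iface, "enable_network", network_id)
wpa(iface, "save_config")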

Connect to and disconnect from the network

# Reconnect
wpa_cli -i wlan1 reconnect

# Disconnect
wpa_cli -i wlan1 disconnect

Remove a network configuration

wpa_cli -i wlan1 remove_network $network_id

Check the connection status

wpa_cli -i wlan1 status

The output is a list of key=value pairs. To check whether the connection succeeded, look at whether wpa_state is COMPLETED.
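
The same check is easy to script from Python; a small sketch that parses the key=value output and looks at wpa_state:

import subprocess

def wifi_connected(iface="wlan1"):
    # `wpa_cli status` prints key=value lines; the link is up when wpa_state is COMPLETED
    output = subprocess.run(["wpa_cli", "-i", iface, "status"],
                            capture_output=True, text=True).stdout
    fields = dict(line.split("=", 1) for line in output.splitlines() if "=" in line)
    return fields.get("wpa_state") == "COMPLETED"

print(wifi_connected())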

List the existing network configurations

sudo wpa_cli -i wlan1 list_networks

The output lists each saved network with its network id, ssid, bssid, and flags. The CURRENT flag marks the network you are connected to right now, and the network id column gives you the id to use when connecting.